00:00:00.002 Started by upstream project "autotest-per-patch" build number 126242 00:00:00.002 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.050 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.051 The recommended git tool is: git 00:00:00.051 using credential 00000000-0000-0000-0000-000000000002 00:00:00.054 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.077 Fetching changes from the remote Git repository 00:00:00.079 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.108 Using shallow fetch with depth 1 00:00:00.108 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.108 > git --version # timeout=10 00:00:00.142 > git --version # 'git version 2.39.2' 00:00:00.142 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.176 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.176 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.004 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.014 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.025 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:05.025 > git config core.sparsecheckout # timeout=10 00:00:05.035 > git read-tree -mu HEAD # timeout=10 00:00:05.049 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:05.067 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:05.067 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:05.142 [Pipeline] Start of Pipeline 00:00:05.154 [Pipeline] library 00:00:05.155 Loading library shm_lib@master 00:00:05.155 Library shm_lib@master is cached. Copying from home. 00:00:05.172 [Pipeline] node 00:00:05.181 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.182 [Pipeline] { 00:00:05.191 [Pipeline] catchError 00:00:05.193 [Pipeline] { 00:00:05.204 [Pipeline] wrap 00:00:05.212 [Pipeline] { 00:00:05.217 [Pipeline] stage 00:00:05.219 [Pipeline] { (Prologue) 00:00:05.456 [Pipeline] sh 00:00:05.740 + logger -p user.info -t JENKINS-CI 00:00:05.759 [Pipeline] echo 00:00:05.760 Node: CYP9 00:00:05.769 [Pipeline] sh 00:00:06.073 [Pipeline] setCustomBuildProperty 00:00:06.086 [Pipeline] echo 00:00:06.088 Cleanup processes 00:00:06.092 [Pipeline] sh 00:00:06.373 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.373 2431027 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.387 [Pipeline] sh 00:00:06.673 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.673 ++ grep -v 'sudo pgrep' 00:00:06.673 ++ awk '{print $1}' 00:00:06.673 + sudo kill -9 00:00:06.673 + true 00:00:06.684 [Pipeline] cleanWs 00:00:06.692 [WS-CLEANUP] Deleting project workspace... 00:00:06.692 [WS-CLEANUP] Deferred wipeout is used... 
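The bare "+ sudo kill -9" followed by "+ true" above is the cleanup step finding nothing to kill: the only pgrep match was the pgrep command itself, grep -v 'sudo pgrep' filters it out, so kill receives no PIDs and true keeps the step from failing the build. A minimal standalone sketch of the same idiom, assuming bash and the workspace path from this log (the pipeline runs its own inline script, which is not reproduced here):

  #!/usr/bin/env bash
  # Kill any leftover processes still running out of the SPDK workspace before
  # the job starts; tolerate the common case where nothing matches.
  ws=/var/jenkins/workspace/nvmf-tcp-phy-autotest
  sudo pgrep -af "$ws/spdk" |
    grep -v 'sudo pgrep' |          # drop the pgrep invocation itself
    awk '{print $1}' |              # keep only the PID column
    xargs -r sudo kill -9 || true   # -r skips kill when the list is empty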
00:00:06.698 [WS-CLEANUP] done 00:00:06.701 [Pipeline] setCustomBuildProperty 00:00:06.710 [Pipeline] sh 00:00:06.991 + sudo git config --global --replace-all safe.directory '*' 00:00:07.070 [Pipeline] httpRequest 00:00:07.106 [Pipeline] echo 00:00:07.108 Sorcerer 10.211.164.101 is alive 00:00:07.113 [Pipeline] httpRequest 00:00:07.117 HttpMethod: GET 00:00:07.118 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:07.118 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:07.140 Response Code: HTTP/1.1 200 OK 00:00:07.140 Success: Status code 200 is in the accepted range: 200,404 00:00:07.141 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:10.395 [Pipeline] sh 00:00:10.680 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:10.697 [Pipeline] httpRequest 00:00:10.726 [Pipeline] echo 00:00:10.728 Sorcerer 10.211.164.101 is alive 00:00:10.739 [Pipeline] httpRequest 00:00:10.744 HttpMethod: GET 00:00:10.745 URL: http://10.211.164.101/packages/spdk_a940d368120131c3cd50300e08f24a6d86433616.tar.gz 00:00:10.746 Sending request to url: http://10.211.164.101/packages/spdk_a940d368120131c3cd50300e08f24a6d86433616.tar.gz 00:00:10.760 Response Code: HTTP/1.1 200 OK 00:00:10.760 Success: Status code 200 is in the accepted range: 200,404 00:00:10.761 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_a940d368120131c3cd50300e08f24a6d86433616.tar.gz 00:01:05.286 [Pipeline] sh 00:01:05.611 + tar --no-same-owner -xf spdk_a940d368120131c3cd50300e08f24a6d86433616.tar.gz 00:01:08.166 [Pipeline] sh 00:01:08.445 + git -C spdk log --oneline -n5 00:01:08.445 a940d3681 util: add spdk_read_sysfs_attribute 00:01:08.445 f604975ba doc: fix deprecation.md typo 00:01:08.445 a95bbf233 blob: set parent_id properly on spdk_bs_blob_set_external_parent. 
00:01:08.445 248c547d0 nvmf/tcp: add option for selecting a sock impl 00:01:08.445 2d30d9f83 accel: introduce tasks in sequence limit 00:01:08.456 [Pipeline] } 00:01:08.468 [Pipeline] // stage 00:01:08.475 [Pipeline] stage 00:01:08.476 [Pipeline] { (Prepare) 00:01:08.489 [Pipeline] writeFile 00:01:08.502 [Pipeline] sh 00:01:08.783 + logger -p user.info -t JENKINS-CI 00:01:08.795 [Pipeline] sh 00:01:09.076 + logger -p user.info -t JENKINS-CI 00:01:09.091 [Pipeline] sh 00:01:09.377 + cat autorun-spdk.conf 00:01:09.377 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:09.377 SPDK_TEST_NVMF=1 00:01:09.377 SPDK_TEST_NVME_CLI=1 00:01:09.377 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:09.377 SPDK_TEST_NVMF_NICS=e810 00:01:09.377 SPDK_TEST_VFIOUSER=1 00:01:09.377 SPDK_RUN_UBSAN=1 00:01:09.377 NET_TYPE=phy 00:01:09.386 RUN_NIGHTLY=0 00:01:09.391 [Pipeline] readFile 00:01:09.453 [Pipeline] withEnv 00:01:09.465 [Pipeline] { 00:01:09.502 [Pipeline] sh 00:01:09.789 + set -ex 00:01:09.789 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:09.789 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:09.789 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:09.789 ++ SPDK_TEST_NVMF=1 00:01:09.789 ++ SPDK_TEST_NVME_CLI=1 00:01:09.789 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:09.789 ++ SPDK_TEST_NVMF_NICS=e810 00:01:09.789 ++ SPDK_TEST_VFIOUSER=1 00:01:09.789 ++ SPDK_RUN_UBSAN=1 00:01:09.789 ++ NET_TYPE=phy 00:01:09.789 ++ RUN_NIGHTLY=0 00:01:09.789 + case $SPDK_TEST_NVMF_NICS in 00:01:09.789 + DRIVERS=ice 00:01:09.789 + [[ tcp == \r\d\m\a ]] 00:01:09.789 + [[ -n ice ]] 00:01:09.789 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:09.789 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:09.789 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:09.789 rmmod: ERROR: Module irdma is not currently loaded 00:01:09.789 rmmod: ERROR: Module i40iw is not currently loaded 00:01:09.789 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:09.789 + true 00:01:09.789 + for D in $DRIVERS 00:01:09.789 + sudo modprobe ice 00:01:09.789 + exit 0 00:01:09.799 [Pipeline] } 00:01:09.815 [Pipeline] // withEnv 00:01:09.820 [Pipeline] } 00:01:09.838 [Pipeline] // stage 00:01:09.848 [Pipeline] catchError 00:01:09.849 [Pipeline] { 00:01:09.865 [Pipeline] timeout 00:01:09.865 Timeout set to expire in 50 min 00:01:09.867 [Pipeline] { 00:01:09.881 [Pipeline] stage 00:01:09.882 [Pipeline] { (Tests) 00:01:09.897 [Pipeline] sh 00:01:10.182 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:10.182 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:10.182 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:10.182 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:10.182 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:10.182 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:10.182 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:10.182 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:10.182 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:10.182 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:10.182 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:10.182 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:10.182 + source /etc/os-release
00:01:10.182 ++ NAME='Fedora Linux'
00:01:10.182 ++ VERSION='38 (Cloud Edition)'
00:01:10.182 ++ ID=fedora
00:01:10.182 ++ VERSION_ID=38
00:01:10.182 ++ VERSION_CODENAME=
00:01:10.182 ++ PLATFORM_ID=platform:f38
00:01:10.182 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:10.182 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:10.182 ++ LOGO=fedora-logo-icon
00:01:10.182 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:10.182 ++ HOME_URL=https://fedoraproject.org/
00:01:10.182 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:10.182 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:10.182 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:10.182 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:10.182 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:10.182 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:10.182 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:10.182 ++ SUPPORT_END=2024-05-14
00:01:10.182 ++ VARIANT='Cloud Edition'
00:01:10.182 ++ VARIANT_ID=cloud
00:01:10.182 + uname -a
00:01:10.182 Linux spdk-cyp-09 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:10.182 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:13.481 Hugepages
00:01:13.481 node hugesize free / total
00:01:13.481 node0 1048576kB 0 / 0
00:01:13.481 node0 2048kB 0 / 0
00:01:13.481 node1 1048576kB 0 / 0
00:01:13.481 node1 2048kB 0 / 0
00:01:13.481
00:01:13.481 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:13.481 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:01:13.481 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:01:13.481 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:01:13.481 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:01:13.481 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:01:13.481 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:01:13.481 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:01:13.481 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:01:13.481 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:01:13.481 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:01:13.481 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:01:13.481 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:01:13.481 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:01:13.481 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:01:13.481 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:01:13.481 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:01:13.481 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:01:13.481 + rm -f /tmp/spdk-ld-path
00:01:13.481 + source autorun-spdk.conf
00:01:13.481 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:13.481 ++ SPDK_TEST_NVMF=1
00:01:13.481 ++ SPDK_TEST_NVME_CLI=1
00:01:13.481 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:13.481 ++ SPDK_TEST_NVMF_NICS=e810
00:01:13.481 ++ SPDK_TEST_VFIOUSER=1
00:01:13.481 ++ SPDK_RUN_UBSAN=1
00:01:13.481 ++ NET_TYPE=phy
00:01:13.481 ++ RUN_NIGHTLY=0
00:01:13.481 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:13.481 + [[ -n '' ]]
00:01:13.481 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:13.481 + for M in /var/spdk/build-*-manifest.txt
00:01:13.481 + [[ -f
/var/spdk/build-pkg-manifest.txt ]] 00:01:13.481 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:13.481 + for M in /var/spdk/build-*-manifest.txt 00:01:13.481 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:13.481 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:13.481 ++ uname 00:01:13.481 + [[ Linux == \L\i\n\u\x ]] 00:01:13.481 + sudo dmesg -T 00:01:13.481 + sudo dmesg --clear 00:01:13.481 + dmesg_pid=2432000 00:01:13.481 + [[ Fedora Linux == FreeBSD ]] 00:01:13.481 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:13.481 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:13.481 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:13.481 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:13.481 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:13.481 + [[ -x /usr/src/fio-static/fio ]] 00:01:13.481 + sudo dmesg -Tw 00:01:13.481 + export FIO_BIN=/usr/src/fio-static/fio 00:01:13.481 + FIO_BIN=/usr/src/fio-static/fio 00:01:13.481 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:13.481 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:13.481 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:13.481 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:13.481 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:13.481 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:13.481 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:13.481 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:13.481 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:13.481 Test configuration: 00:01:13.481 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:13.481 SPDK_TEST_NVMF=1 00:01:13.481 SPDK_TEST_NVME_CLI=1 00:01:13.481 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:13.481 SPDK_TEST_NVMF_NICS=e810 00:01:13.481 SPDK_TEST_VFIOUSER=1 00:01:13.481 SPDK_RUN_UBSAN=1 00:01:13.481 NET_TYPE=phy 00:01:13.481 RUN_NIGHTLY=0 21:58:38 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:13.481 21:58:38 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:13.481 21:58:38 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:13.481 21:58:38 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:13.481 21:58:38 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:13.481 21:58:38 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:13.481 21:58:38 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:13.481 21:58:38 -- paths/export.sh@5 -- $ export PATH 00:01:13.481 21:58:38 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:13.481 21:58:38 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:13.481 21:58:38 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:13.481 21:58:38 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721073518.XXXXXX 00:01:13.481 21:58:38 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721073518.xITvQx 00:01:13.481 21:58:38 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:13.481 21:58:38 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:13.481 21:58:38 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:13.481 21:58:38 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:13.481 21:58:38 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:13.481 21:58:38 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:13.481 21:58:38 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:13.481 21:58:38 -- common/autotest_common.sh@10 -- $ set +x 00:01:13.481 21:58:38 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:13.481 21:58:38 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:13.481 21:58:38 -- pm/common@17 -- $ local monitor 00:01:13.481 21:58:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:13.481 21:58:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:13.481 21:58:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:13.481 21:58:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:13.481 21:58:38 -- pm/common@21 -- $ date +%s 00:01:13.481 21:58:38 -- pm/common@21 -- $ date +%s 00:01:13.481 21:58:38 -- pm/common@25 -- $ sleep 1 00:01:13.481 21:58:38 -- pm/common@21 -- $ date +%s 00:01:13.481 21:58:38 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721073518 00:01:13.481 21:58:38 -- pm/common@21 -- $ date +%s 00:01:13.481 21:58:38 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721073518 00:01:13.481 21:58:38 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721073518 00:01:13.481 21:58:38 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721073518 00:01:13.481 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721073518_collect-cpu-load.pm.log 00:01:13.481 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721073518_collect-vmstat.pm.log 00:01:13.481 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721073518_collect-cpu-temp.pm.log 00:01:13.481 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721073518_collect-bmc-pm.bmc.pm.log 00:01:14.424 21:58:39 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:14.424 21:58:39 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:14.424 21:58:39 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:14.424 21:58:39 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:14.424 21:58:39 -- spdk/autobuild.sh@16 -- $ date -u 00:01:14.424 Mon Jul 15 07:58:39 PM UTC 2024 00:01:14.424 21:58:39 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:14.424 v24.09-pre-211-ga940d3681 00:01:14.424 21:58:39 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:14.424 21:58:39 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:14.424 21:58:39 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:14.424 21:58:39 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:14.424 21:58:39 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:14.424 21:58:39 -- common/autotest_common.sh@10 -- $ set +x 00:01:14.424 ************************************ 00:01:14.424 START TEST ubsan 00:01:14.424 ************************************ 00:01:14.424 21:58:39 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:14.424 using ubsan 00:01:14.424 00:01:14.424 real 0m0.000s 00:01:14.424 user 0m0.000s 00:01:14.424 sys 0m0.000s 00:01:14.424 21:58:39 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:14.424 21:58:39 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:14.424 ************************************ 00:01:14.424 END TEST ubsan 00:01:14.424 ************************************ 00:01:14.424 21:58:39 -- common/autotest_common.sh@1142 -- $ return 0 00:01:14.424 21:58:39 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:14.424 21:58:39 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:14.424 21:58:39 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:14.424 21:58:39 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:14.424 21:58:39 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:14.424 21:58:39 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:14.424 21:58:39 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:14.424 21:58:39 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:14.424 21:58:39 -- spdk/autobuild.sh@67 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:14.684 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:14.684 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:15.254 Using 'verbs' RDMA provider 00:01:30.832 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:43.066 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:43.066 Creating mk/config.mk...done. 00:01:43.067 Creating mk/cc.flags.mk...done. 00:01:43.067 Type 'make' to build. 00:01:43.067 21:59:07 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:01:43.067 21:59:07 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:43.067 21:59:07 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:43.067 21:59:07 -- common/autotest_common.sh@10 -- $ set +x 00:01:43.067 ************************************ 00:01:43.067 START TEST make 00:01:43.067 ************************************ 00:01:43.067 21:59:07 make -- common/autotest_common.sh@1123 -- $ make -j144 00:01:43.067 make[1]: Nothing to be done for 'all'. 00:01:44.446 The Meson build system 00:01:44.446 Version: 1.3.1 00:01:44.446 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:44.446 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:44.446 Build type: native build 00:01:44.446 Project name: libvfio-user 00:01:44.446 Project version: 0.0.1 00:01:44.446 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:44.446 C linker for the host machine: cc ld.bfd 2.39-16 00:01:44.446 Host machine cpu family: x86_64 00:01:44.446 Host machine cpu: x86_64 00:01:44.446 Run-time dependency threads found: YES 00:01:44.446 Library dl found: YES 00:01:44.446 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:44.446 Run-time dependency json-c found: YES 0.17 00:01:44.446 Run-time dependency cmocka found: YES 1.1.7 00:01:44.446 Program pytest-3 found: NO 00:01:44.446 Program flake8 found: NO 00:01:44.446 Program misspell-fixer found: NO 00:01:44.446 Program restructuredtext-lint found: NO 00:01:44.446 Program valgrind found: YES (/usr/bin/valgrind) 00:01:44.446 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:44.446 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:44.446 Compiler for C supports arguments -Wwrite-strings: YES 00:01:44.446 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:44.446 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:44.446 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:44.446 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
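Condensed, the configure-and-make step traced above comes down to the following two commands (the flags are copied from the configure invocation in this log; -j144 matches this host's core count, "$(nproc)" is the usual portable substitute):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Instrumented debug build: UBSan, coverage, warnings-as-errors, plus the
  # vfio-user, ublk and shared-library options this test job needs.
  ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
  make -j"$(nproc)"   # the job pins this to -j144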
00:01:44.446 Build targets in project: 8 00:01:44.446 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:44.446 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:44.446 00:01:44.446 libvfio-user 0.0.1 00:01:44.446 00:01:44.446 User defined options 00:01:44.446 buildtype : debug 00:01:44.446 default_library: shared 00:01:44.446 libdir : /usr/local/lib 00:01:44.446 00:01:44.446 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:44.704 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:44.704 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:44.704 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:44.704 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:44.704 [4/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:44.704 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:44.704 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:44.704 [7/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:44.704 [8/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:44.704 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:44.704 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:44.704 [11/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:44.704 [12/37] Compiling C object samples/null.p/null.c.o 00:01:44.704 [13/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:44.704 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:44.704 [15/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:44.704 [16/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:44.965 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:44.965 [18/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:44.965 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:44.965 [20/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:44.965 [21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:44.965 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:44.965 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:44.965 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:44.965 [25/37] Compiling C object samples/client.p/client.c.o 00:01:44.965 [26/37] Compiling C object samples/server.p/server.c.o 00:01:44.965 [27/37] Linking target samples/client 00:01:44.965 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:44.965 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:01:44.965 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:44.965 [31/37] Linking target test/unit_tests 00:01:45.225 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:45.225 [33/37] Linking target samples/lspci 00:01:45.225 [34/37] Linking target samples/null 00:01:45.225 [35/37] Linking target samples/server 00:01:45.225 [36/37] Linking target samples/gpio-pci-idio-16 00:01:45.225 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:45.225 INFO: autodetecting backend as ninja 00:01:45.225 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
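The libvfio-user build that just completed corresponds roughly to this sequence; the source and build paths and the debug, shared-library and libdir options are taken from the configuration summary above, while the exact meson setup invocation SPDK issues is not shown in this log, so treat it as a sketch:

  src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
  out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user
  # Configure libvfio-user as a shared debug build, compile the 37 targets, then
  # stage the result under DESTDIR (the install command is echoed just below).
  meson setup "$out/build-debug" "$src" --buildtype debug \
      -Ddefault_library=shared -Dlibdir=/usr/local/lib
  ninja -C "$out/build-debug"
  DESTDIR="$out" meson install --quiet -C "$out/build-debug"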
00:01:45.225 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:45.486 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:45.486 ninja: no work to do. 00:01:52.081 The Meson build system 00:01:52.081 Version: 1.3.1 00:01:52.081 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:52.081 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:52.081 Build type: native build 00:01:52.081 Program cat found: YES (/usr/bin/cat) 00:01:52.081 Project name: DPDK 00:01:52.081 Project version: 24.03.0 00:01:52.081 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:52.081 C linker for the host machine: cc ld.bfd 2.39-16 00:01:52.081 Host machine cpu family: x86_64 00:01:52.081 Host machine cpu: x86_64 00:01:52.081 Message: ## Building in Developer Mode ## 00:01:52.081 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:52.081 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:52.081 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:52.081 Program python3 found: YES (/usr/bin/python3) 00:01:52.081 Program cat found: YES (/usr/bin/cat) 00:01:52.081 Compiler for C supports arguments -march=native: YES 00:01:52.081 Checking for size of "void *" : 8 00:01:52.081 Checking for size of "void *" : 8 (cached) 00:01:52.081 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:52.081 Library m found: YES 00:01:52.081 Library numa found: YES 00:01:52.081 Has header "numaif.h" : YES 00:01:52.081 Library fdt found: NO 00:01:52.081 Library execinfo found: NO 00:01:52.081 Has header "execinfo.h" : YES 00:01:52.081 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:52.081 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:52.081 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:52.081 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:52.081 Run-time dependency openssl found: YES 3.0.9 00:01:52.081 Run-time dependency libpcap found: YES 1.10.4 00:01:52.081 Has header "pcap.h" with dependency libpcap: YES 00:01:52.081 Compiler for C supports arguments -Wcast-qual: YES 00:01:52.081 Compiler for C supports arguments -Wdeprecated: YES 00:01:52.081 Compiler for C supports arguments -Wformat: YES 00:01:52.081 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:52.081 Compiler for C supports arguments -Wformat-security: NO 00:01:52.081 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:52.081 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:52.081 Compiler for C supports arguments -Wnested-externs: YES 00:01:52.081 Compiler for C supports arguments -Wold-style-definition: YES 00:01:52.081 Compiler for C supports arguments -Wpointer-arith: YES 00:01:52.081 Compiler for C supports arguments -Wsign-compare: YES 00:01:52.081 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:52.081 Compiler for C supports arguments -Wundef: YES 00:01:52.081 Compiler for C supports arguments -Wwrite-strings: YES 00:01:52.081 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:52.081 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:01:52.081 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:52.081 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:52.081 Program objdump found: YES (/usr/bin/objdump) 00:01:52.081 Compiler for C supports arguments -mavx512f: YES 00:01:52.081 Checking if "AVX512 checking" compiles: YES 00:01:52.081 Fetching value of define "__SSE4_2__" : 1 00:01:52.081 Fetching value of define "__AES__" : 1 00:01:52.081 Fetching value of define "__AVX__" : 1 00:01:52.081 Fetching value of define "__AVX2__" : 1 00:01:52.081 Fetching value of define "__AVX512BW__" : 1 00:01:52.081 Fetching value of define "__AVX512CD__" : 1 00:01:52.081 Fetching value of define "__AVX512DQ__" : 1 00:01:52.081 Fetching value of define "__AVX512F__" : 1 00:01:52.081 Fetching value of define "__AVX512VL__" : 1 00:01:52.081 Fetching value of define "__PCLMUL__" : 1 00:01:52.081 Fetching value of define "__RDRND__" : 1 00:01:52.081 Fetching value of define "__RDSEED__" : 1 00:01:52.081 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:52.081 Fetching value of define "__znver1__" : (undefined) 00:01:52.081 Fetching value of define "__znver2__" : (undefined) 00:01:52.081 Fetching value of define "__znver3__" : (undefined) 00:01:52.081 Fetching value of define "__znver4__" : (undefined) 00:01:52.081 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:52.081 Message: lib/log: Defining dependency "log" 00:01:52.081 Message: lib/kvargs: Defining dependency "kvargs" 00:01:52.081 Message: lib/telemetry: Defining dependency "telemetry" 00:01:52.081 Checking for function "getentropy" : NO 00:01:52.081 Message: lib/eal: Defining dependency "eal" 00:01:52.081 Message: lib/ring: Defining dependency "ring" 00:01:52.081 Message: lib/rcu: Defining dependency "rcu" 00:01:52.081 Message: lib/mempool: Defining dependency "mempool" 00:01:52.081 Message: lib/mbuf: Defining dependency "mbuf" 00:01:52.081 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:52.081 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:52.081 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:52.081 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:52.081 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:52.081 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:52.081 Compiler for C supports arguments -mpclmul: YES 00:01:52.081 Compiler for C supports arguments -maes: YES 00:01:52.081 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:52.081 Compiler for C supports arguments -mavx512bw: YES 00:01:52.081 Compiler for C supports arguments -mavx512dq: YES 00:01:52.081 Compiler for C supports arguments -mavx512vl: YES 00:01:52.081 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:52.081 Compiler for C supports arguments -mavx2: YES 00:01:52.081 Compiler for C supports arguments -mavx: YES 00:01:52.081 Message: lib/net: Defining dependency "net" 00:01:52.081 Message: lib/meter: Defining dependency "meter" 00:01:52.081 Message: lib/ethdev: Defining dependency "ethdev" 00:01:52.081 Message: lib/pci: Defining dependency "pci" 00:01:52.081 Message: lib/cmdline: Defining dependency "cmdline" 00:01:52.081 Message: lib/hash: Defining dependency "hash" 00:01:52.081 Message: lib/timer: Defining dependency "timer" 00:01:52.081 Message: lib/compressdev: Defining dependency "compressdev" 00:01:52.081 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:52.081 Message: lib/dmadev: Defining dependency "dmadev" 00:01:52.081 Compiler for C 
supports arguments -Wno-cast-qual: YES 00:01:52.081 Message: lib/power: Defining dependency "power" 00:01:52.081 Message: lib/reorder: Defining dependency "reorder" 00:01:52.081 Message: lib/security: Defining dependency "security" 00:01:52.081 Has header "linux/userfaultfd.h" : YES 00:01:52.081 Has header "linux/vduse.h" : YES 00:01:52.081 Message: lib/vhost: Defining dependency "vhost" 00:01:52.081 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:52.081 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:52.081 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:52.081 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:52.081 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:52.081 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:52.081 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:52.081 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:52.081 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:52.081 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:52.081 Program doxygen found: YES (/usr/bin/doxygen) 00:01:52.081 Configuring doxy-api-html.conf using configuration 00:01:52.081 Configuring doxy-api-man.conf using configuration 00:01:52.081 Program mandb found: YES (/usr/bin/mandb) 00:01:52.081 Program sphinx-build found: NO 00:01:52.081 Configuring rte_build_config.h using configuration 00:01:52.081 Message: 00:01:52.081 ================= 00:01:52.081 Applications Enabled 00:01:52.081 ================= 00:01:52.081 00:01:52.081 apps: 00:01:52.081 00:01:52.081 00:01:52.081 Message: 00:01:52.081 ================= 00:01:52.081 Libraries Enabled 00:01:52.081 ================= 00:01:52.081 00:01:52.081 libs: 00:01:52.081 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:52.081 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:52.081 cryptodev, dmadev, power, reorder, security, vhost, 00:01:52.081 00:01:52.081 Message: 00:01:52.081 =============== 00:01:52.081 Drivers Enabled 00:01:52.081 =============== 00:01:52.081 00:01:52.081 common: 00:01:52.081 00:01:52.081 bus: 00:01:52.081 pci, vdev, 00:01:52.081 mempool: 00:01:52.081 ring, 00:01:52.081 dma: 00:01:52.081 00:01:52.081 net: 00:01:52.081 00:01:52.081 crypto: 00:01:52.081 00:01:52.081 compress: 00:01:52.081 00:01:52.081 vdpa: 00:01:52.081 00:01:52.081 00:01:52.081 Message: 00:01:52.081 ================= 00:01:52.081 Content Skipped 00:01:52.081 ================= 00:01:52.081 00:01:52.081 apps: 00:01:52.081 dumpcap: explicitly disabled via build config 00:01:52.081 graph: explicitly disabled via build config 00:01:52.081 pdump: explicitly disabled via build config 00:01:52.081 proc-info: explicitly disabled via build config 00:01:52.081 test-acl: explicitly disabled via build config 00:01:52.081 test-bbdev: explicitly disabled via build config 00:01:52.081 test-cmdline: explicitly disabled via build config 00:01:52.081 test-compress-perf: explicitly disabled via build config 00:01:52.081 test-crypto-perf: explicitly disabled via build config 00:01:52.081 test-dma-perf: explicitly disabled via build config 00:01:52.081 test-eventdev: explicitly disabled via build config 00:01:52.081 test-fib: explicitly disabled via build config 00:01:52.081 test-flow-perf: explicitly disabled via build config 00:01:52.081 test-gpudev: explicitly disabled via build config 00:01:52.082 
test-mldev: explicitly disabled via build config 00:01:52.082 test-pipeline: explicitly disabled via build config 00:01:52.082 test-pmd: explicitly disabled via build config 00:01:52.082 test-regex: explicitly disabled via build config 00:01:52.082 test-sad: explicitly disabled via build config 00:01:52.082 test-security-perf: explicitly disabled via build config 00:01:52.082 00:01:52.082 libs: 00:01:52.082 argparse: explicitly disabled via build config 00:01:52.082 metrics: explicitly disabled via build config 00:01:52.082 acl: explicitly disabled via build config 00:01:52.082 bbdev: explicitly disabled via build config 00:01:52.082 bitratestats: explicitly disabled via build config 00:01:52.082 bpf: explicitly disabled via build config 00:01:52.082 cfgfile: explicitly disabled via build config 00:01:52.082 distributor: explicitly disabled via build config 00:01:52.082 efd: explicitly disabled via build config 00:01:52.082 eventdev: explicitly disabled via build config 00:01:52.082 dispatcher: explicitly disabled via build config 00:01:52.082 gpudev: explicitly disabled via build config 00:01:52.082 gro: explicitly disabled via build config 00:01:52.082 gso: explicitly disabled via build config 00:01:52.082 ip_frag: explicitly disabled via build config 00:01:52.082 jobstats: explicitly disabled via build config 00:01:52.082 latencystats: explicitly disabled via build config 00:01:52.082 lpm: explicitly disabled via build config 00:01:52.082 member: explicitly disabled via build config 00:01:52.082 pcapng: explicitly disabled via build config 00:01:52.082 rawdev: explicitly disabled via build config 00:01:52.082 regexdev: explicitly disabled via build config 00:01:52.082 mldev: explicitly disabled via build config 00:01:52.082 rib: explicitly disabled via build config 00:01:52.082 sched: explicitly disabled via build config 00:01:52.082 stack: explicitly disabled via build config 00:01:52.082 ipsec: explicitly disabled via build config 00:01:52.082 pdcp: explicitly disabled via build config 00:01:52.082 fib: explicitly disabled via build config 00:01:52.082 port: explicitly disabled via build config 00:01:52.082 pdump: explicitly disabled via build config 00:01:52.082 table: explicitly disabled via build config 00:01:52.082 pipeline: explicitly disabled via build config 00:01:52.082 graph: explicitly disabled via build config 00:01:52.082 node: explicitly disabled via build config 00:01:52.082 00:01:52.082 drivers: 00:01:52.082 common/cpt: not in enabled drivers build config 00:01:52.082 common/dpaax: not in enabled drivers build config 00:01:52.082 common/iavf: not in enabled drivers build config 00:01:52.082 common/idpf: not in enabled drivers build config 00:01:52.082 common/ionic: not in enabled drivers build config 00:01:52.082 common/mvep: not in enabled drivers build config 00:01:52.082 common/octeontx: not in enabled drivers build config 00:01:52.082 bus/auxiliary: not in enabled drivers build config 00:01:52.082 bus/cdx: not in enabled drivers build config 00:01:52.082 bus/dpaa: not in enabled drivers build config 00:01:52.082 bus/fslmc: not in enabled drivers build config 00:01:52.082 bus/ifpga: not in enabled drivers build config 00:01:52.082 bus/platform: not in enabled drivers build config 00:01:52.082 bus/uacce: not in enabled drivers build config 00:01:52.082 bus/vmbus: not in enabled drivers build config 00:01:52.082 common/cnxk: not in enabled drivers build config 00:01:52.082 common/mlx5: not in enabled drivers build config 00:01:52.082 common/nfp: not in enabled drivers 
build config 00:01:52.082 common/nitrox: not in enabled drivers build config 00:01:52.082 common/qat: not in enabled drivers build config 00:01:52.082 common/sfc_efx: not in enabled drivers build config 00:01:52.082 mempool/bucket: not in enabled drivers build config 00:01:52.082 mempool/cnxk: not in enabled drivers build config 00:01:52.082 mempool/dpaa: not in enabled drivers build config 00:01:52.082 mempool/dpaa2: not in enabled drivers build config 00:01:52.082 mempool/octeontx: not in enabled drivers build config 00:01:52.082 mempool/stack: not in enabled drivers build config 00:01:52.082 dma/cnxk: not in enabled drivers build config 00:01:52.082 dma/dpaa: not in enabled drivers build config 00:01:52.082 dma/dpaa2: not in enabled drivers build config 00:01:52.082 dma/hisilicon: not in enabled drivers build config 00:01:52.082 dma/idxd: not in enabled drivers build config 00:01:52.082 dma/ioat: not in enabled drivers build config 00:01:52.082 dma/skeleton: not in enabled drivers build config 00:01:52.082 net/af_packet: not in enabled drivers build config 00:01:52.082 net/af_xdp: not in enabled drivers build config 00:01:52.082 net/ark: not in enabled drivers build config 00:01:52.082 net/atlantic: not in enabled drivers build config 00:01:52.082 net/avp: not in enabled drivers build config 00:01:52.082 net/axgbe: not in enabled drivers build config 00:01:52.082 net/bnx2x: not in enabled drivers build config 00:01:52.082 net/bnxt: not in enabled drivers build config 00:01:52.082 net/bonding: not in enabled drivers build config 00:01:52.082 net/cnxk: not in enabled drivers build config 00:01:52.082 net/cpfl: not in enabled drivers build config 00:01:52.082 net/cxgbe: not in enabled drivers build config 00:01:52.082 net/dpaa: not in enabled drivers build config 00:01:52.082 net/dpaa2: not in enabled drivers build config 00:01:52.082 net/e1000: not in enabled drivers build config 00:01:52.082 net/ena: not in enabled drivers build config 00:01:52.082 net/enetc: not in enabled drivers build config 00:01:52.082 net/enetfec: not in enabled drivers build config 00:01:52.082 net/enic: not in enabled drivers build config 00:01:52.082 net/failsafe: not in enabled drivers build config 00:01:52.082 net/fm10k: not in enabled drivers build config 00:01:52.082 net/gve: not in enabled drivers build config 00:01:52.082 net/hinic: not in enabled drivers build config 00:01:52.082 net/hns3: not in enabled drivers build config 00:01:52.082 net/i40e: not in enabled drivers build config 00:01:52.082 net/iavf: not in enabled drivers build config 00:01:52.082 net/ice: not in enabled drivers build config 00:01:52.082 net/idpf: not in enabled drivers build config 00:01:52.082 net/igc: not in enabled drivers build config 00:01:52.082 net/ionic: not in enabled drivers build config 00:01:52.082 net/ipn3ke: not in enabled drivers build config 00:01:52.082 net/ixgbe: not in enabled drivers build config 00:01:52.082 net/mana: not in enabled drivers build config 00:01:52.082 net/memif: not in enabled drivers build config 00:01:52.082 net/mlx4: not in enabled drivers build config 00:01:52.082 net/mlx5: not in enabled drivers build config 00:01:52.082 net/mvneta: not in enabled drivers build config 00:01:52.082 net/mvpp2: not in enabled drivers build config 00:01:52.082 net/netvsc: not in enabled drivers build config 00:01:52.082 net/nfb: not in enabled drivers build config 00:01:52.082 net/nfp: not in enabled drivers build config 00:01:52.082 net/ngbe: not in enabled drivers build config 00:01:52.082 net/null: not in 
enabled drivers build config 00:01:52.082 net/octeontx: not in enabled drivers build config 00:01:52.082 net/octeon_ep: not in enabled drivers build config 00:01:52.082 net/pcap: not in enabled drivers build config 00:01:52.082 net/pfe: not in enabled drivers build config 00:01:52.082 net/qede: not in enabled drivers build config 00:01:52.082 net/ring: not in enabled drivers build config 00:01:52.082 net/sfc: not in enabled drivers build config 00:01:52.082 net/softnic: not in enabled drivers build config 00:01:52.082 net/tap: not in enabled drivers build config 00:01:52.082 net/thunderx: not in enabled drivers build config 00:01:52.082 net/txgbe: not in enabled drivers build config 00:01:52.082 net/vdev_netvsc: not in enabled drivers build config 00:01:52.082 net/vhost: not in enabled drivers build config 00:01:52.082 net/virtio: not in enabled drivers build config 00:01:52.082 net/vmxnet3: not in enabled drivers build config 00:01:52.082 raw/*: missing internal dependency, "rawdev" 00:01:52.082 crypto/armv8: not in enabled drivers build config 00:01:52.082 crypto/bcmfs: not in enabled drivers build config 00:01:52.082 crypto/caam_jr: not in enabled drivers build config 00:01:52.082 crypto/ccp: not in enabled drivers build config 00:01:52.082 crypto/cnxk: not in enabled drivers build config 00:01:52.082 crypto/dpaa_sec: not in enabled drivers build config 00:01:52.082 crypto/dpaa2_sec: not in enabled drivers build config 00:01:52.082 crypto/ipsec_mb: not in enabled drivers build config 00:01:52.082 crypto/mlx5: not in enabled drivers build config 00:01:52.082 crypto/mvsam: not in enabled drivers build config 00:01:52.082 crypto/nitrox: not in enabled drivers build config 00:01:52.082 crypto/null: not in enabled drivers build config 00:01:52.082 crypto/octeontx: not in enabled drivers build config 00:01:52.082 crypto/openssl: not in enabled drivers build config 00:01:52.082 crypto/scheduler: not in enabled drivers build config 00:01:52.082 crypto/uadk: not in enabled drivers build config 00:01:52.082 crypto/virtio: not in enabled drivers build config 00:01:52.082 compress/isal: not in enabled drivers build config 00:01:52.082 compress/mlx5: not in enabled drivers build config 00:01:52.082 compress/nitrox: not in enabled drivers build config 00:01:52.082 compress/octeontx: not in enabled drivers build config 00:01:52.082 compress/zlib: not in enabled drivers build config 00:01:52.082 regex/*: missing internal dependency, "regexdev" 00:01:52.082 ml/*: missing internal dependency, "mldev" 00:01:52.083 vdpa/ifc: not in enabled drivers build config 00:01:52.083 vdpa/mlx5: not in enabled drivers build config 00:01:52.083 vdpa/nfp: not in enabled drivers build config 00:01:52.083 vdpa/sfc: not in enabled drivers build config 00:01:52.083 event/*: missing internal dependency, "eventdev" 00:01:52.083 baseband/*: missing internal dependency, "bbdev" 00:01:52.083 gpu/*: missing internal dependency, "gpudev" 00:01:52.083 00:01:52.083 00:01:52.083 Build targets in project: 84 00:01:52.083 00:01:52.083 DPDK 24.03.0 00:01:52.083 00:01:52.083 User defined options 00:01:52.083 buildtype : debug 00:01:52.083 default_library : shared 00:01:52.083 libdir : lib 00:01:52.083 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:52.083 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:52.083 c_link_args : 00:01:52.083 cpu_instruction_set: native 00:01:52.083 disable_apps : 
test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:01:52.083 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:01:52.083 enable_docs : false 00:01:52.083 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:52.083 enable_kmods : false 00:01:52.083 max_lcores : 128 00:01:52.083 tests : false 00:01:52.083 00:01:52.083 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:52.083 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:52.083 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:52.083 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:52.083 [3/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:52.083 [4/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:52.083 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:52.083 [6/267] Linking static target lib/librte_kvargs.a 00:01:52.083 [7/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:52.083 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:52.083 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:52.083 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:52.083 [11/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:52.083 [12/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:52.083 [13/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:52.083 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:52.345 [15/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:52.345 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:52.345 [17/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:52.345 [18/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:52.345 [19/267] Linking static target lib/librte_log.a 00:01:52.345 [20/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:52.345 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:52.345 [22/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:52.345 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:52.345 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:52.345 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:52.345 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:52.345 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:52.345 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:52.345 [29/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:52.345 [30/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:52.345 [31/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:52.345 [32/267] Linking static target lib/librte_pci.a 00:01:52.345 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:52.345 [34/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:52.345 [35/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:52.345 [36/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:52.345 [37/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:52.345 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:52.604 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:52.604 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:52.604 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:52.604 [42/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:52.604 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:52.604 [44/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:52.604 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:52.604 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:52.604 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:52.604 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:52.604 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:52.604 [50/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:52.604 [51/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.604 [52/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.604 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:52.604 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:52.604 [55/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:52.604 [56/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:52.604 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:52.604 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:52.604 [59/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:52.604 [60/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:52.604 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:52.604 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:52.604 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:52.604 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:52.604 [65/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:52.604 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:52.604 [67/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:52.604 [68/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:52.604 [69/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:52.604 [70/267] Compiling C object 
lib/librte_net.a.p/net_rte_ether.c.o 00:01:52.604 [71/267] Linking static target lib/librte_ring.a 00:01:52.604 [72/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:52.604 [73/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:52.604 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:52.604 [75/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:52.604 [76/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:52.604 [77/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:52.604 [78/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:52.604 [79/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:52.604 [80/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:52.604 [81/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:52.604 [82/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:52.604 [83/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:52.604 [84/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:52.604 [85/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:52.604 [86/267] Linking static target lib/librte_telemetry.a 00:01:52.865 [87/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:52.865 [88/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:52.865 [89/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:52.865 [90/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:52.865 [91/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:52.865 [92/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:52.865 [93/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:52.865 [94/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:52.865 [95/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:52.865 [96/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:52.865 [97/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:52.865 [98/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:52.865 [99/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:52.865 [100/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:52.865 [101/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:52.865 [102/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:52.865 [103/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:52.865 [104/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:52.865 [105/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:52.865 [106/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:52.865 [107/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:52.865 [108/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:52.865 [109/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:52.865 [110/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:52.865 [111/267] Linking 
static target lib/librte_meter.a 00:01:52.865 [112/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:52.865 [113/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:52.865 [114/267] Linking static target lib/librte_mbuf.a 00:01:52.865 [115/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:52.865 [116/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:52.865 [117/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:52.865 [118/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:52.865 [119/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:52.865 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:52.865 [121/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:52.865 [122/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:52.865 [123/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:52.865 [124/267] Linking static target lib/librte_net.a 00:01:52.865 [125/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:52.865 [126/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:52.865 [127/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:52.865 [128/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:52.865 [129/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:52.865 [130/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:52.865 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:52.865 [132/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:52.865 [133/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:52.865 [134/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:52.865 [135/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:52.865 [136/267] Linking static target lib/librte_rcu.a 00:01:52.865 [137/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:52.865 [138/267] Linking static target lib/librte_compressdev.a 00:01:52.865 [139/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:52.865 [140/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:52.865 [141/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:52.865 [142/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:52.865 [143/267] Linking static target lib/librte_mempool.a 00:01:52.865 [144/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:52.865 [145/267] Linking static target lib/librte_timer.a 00:01:52.865 [146/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:52.865 [147/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:52.865 [148/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:52.865 [149/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:52.865 [150/267] Linking static target lib/librte_security.a 00:01:52.865 [151/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:52.865 [152/267] Linking static target lib/librte_cmdline.a 00:01:52.865 [153/267] Linking static target 
lib/librte_dmadev.a 00:01:52.865 [154/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:52.865 [155/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:52.865 [156/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.865 [157/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:52.865 [158/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:52.865 [159/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:52.865 [160/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:52.865 [161/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:52.865 [162/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:52.865 [163/267] Linking target lib/librte_log.so.24.1 00:01:52.865 [164/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:52.865 [165/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:52.865 [166/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:52.865 [167/267] Linking static target lib/librte_power.a 00:01:52.865 [168/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:52.865 [169/267] Linking static target lib/librte_reorder.a 00:01:52.865 [170/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:52.865 [171/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:52.865 [172/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:52.865 [173/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:52.865 [174/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:52.865 [175/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:53.126 [176/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:53.126 [177/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:53.126 [178/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:53.126 [179/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:53.126 [180/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:53.126 [181/267] Linking static target drivers/librte_bus_vdev.a 00:01:53.126 [182/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:53.126 [183/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:53.126 [184/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:53.126 [185/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.126 [186/267] Linking static target lib/librte_eal.a 00:01:53.126 [187/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.126 [188/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:53.126 [189/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:53.126 [190/267] Linking target lib/librte_kvargs.so.24.1 00:01:53.126 [191/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:53.126 [192/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.126 [193/267] Generating drivers/rte_mempool_ring.pmd.c with a custom 
command 00:01:53.126 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:53.126 [195/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:53.126 [196/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:53.126 [197/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:53.126 [198/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:53.126 [199/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:53.126 [200/267] Linking static target drivers/librte_mempool_ring.a 00:01:53.126 [201/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:53.126 [202/267] Linking static target drivers/librte_bus_pci.a 00:01:53.126 [203/267] Linking static target lib/librte_hash.a 00:01:53.126 [204/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.387 [205/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:53.387 [206/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.388 [207/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:53.388 [208/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:53.388 [209/267] Linking target lib/librte_telemetry.so.24.1 00:01:53.388 [210/267] Linking static target lib/librte_cryptodev.a 00:01:53.388 [211/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.388 [212/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.388 [213/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.388 [214/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:53.388 [215/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.648 [216/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.648 [217/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.648 [218/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.648 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:53.648 [220/267] Linking static target lib/librte_ethdev.a 00:01:53.910 [221/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:53.910 [222/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.910 [223/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.171 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.171 [225/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.171 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.432 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:54.432 [228/267] Linking static target lib/librte_vhost.a 00:01:55.376 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 
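The DPDK configuration summary at the top of this section (disable_libs, enable_drivers, enable_docs, enable_kmods, max_lcores, tests) lists meson project options for the bundled DPDK build. A hedged sketch of an equivalent manual configure step follows; the option values are abridged from the summary above, and the exact command line driven by SPDK's build scripts is not shown in this log:

  # Configure the DPDK build directory with only the listed drivers enabled and
  # a trimmed set of libraries disabled, then compile with ninja.
  # Paths and the shortened disable_libs list are illustrative.
  meson setup dpdk/build-tmp dpdk \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
      -Ddisable_libs=port,lpm,ipsec,regexdev,dispatcher \
      -Dmax_lcores=128 -Dtests=false -Denable_docs=false -Denable_kmods=false
  ninja -C dpdk/build-tmp
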
00:01:56.762 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.431 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.371 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.631 [233/267] Linking target lib/librte_eal.so.24.1 00:02:04.631 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:04.891 [235/267] Linking target lib/librte_pci.so.24.1 00:02:04.891 [236/267] Linking target lib/librte_ring.so.24.1 00:02:04.891 [237/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:04.891 [238/267] Linking target lib/librte_timer.so.24.1 00:02:04.891 [239/267] Linking target lib/librte_meter.so.24.1 00:02:04.891 [240/267] Linking target lib/librte_dmadev.so.24.1 00:02:04.891 [241/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:04.891 [242/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:04.891 [243/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:04.891 [244/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:04.891 [245/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:04.891 [246/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:04.891 [247/267] Linking target lib/librte_rcu.so.24.1 00:02:04.891 [248/267] Linking target lib/librte_mempool.so.24.1 00:02:05.151 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:05.151 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:05.151 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:05.151 [252/267] Linking target lib/librte_mbuf.so.24.1 00:02:05.151 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:05.412 [254/267] Linking target lib/librte_net.so.24.1 00:02:05.412 [255/267] Linking target lib/librte_reorder.so.24.1 00:02:05.412 [256/267] Linking target lib/librte_compressdev.so.24.1 00:02:05.412 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:02:05.412 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:05.412 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:05.412 [260/267] Linking target lib/librte_hash.so.24.1 00:02:05.412 [261/267] Linking target lib/librte_cmdline.so.24.1 00:02:05.673 [262/267] Linking target lib/librte_security.so.24.1 00:02:05.673 [263/267] Linking target lib/librte_ethdev.so.24.1 00:02:05.673 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:05.673 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:05.673 [266/267] Linking target lib/librte_power.so.24.1 00:02:05.673 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:05.673 INFO: autodetecting backend as ninja 00:02:05.673 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:07.075 CC lib/log/log.o 00:02:07.075 CC lib/log/log_flags.o 00:02:07.075 CC lib/log/log_deprecated.o 00:02:07.075 CC lib/ut/ut.o 00:02:07.075 CC lib/ut_mock/mock.o 00:02:07.075 LIB libspdk_log.a 00:02:07.075 LIB libspdk_ut.a 00:02:07.075 LIB libspdk_ut_mock.a 00:02:07.075 SO 
libspdk_log.so.7.0 00:02:07.075 SO libspdk_ut_mock.so.6.0 00:02:07.075 SO libspdk_ut.so.2.0 00:02:07.075 SYMLINK libspdk_ut_mock.so 00:02:07.075 SYMLINK libspdk_log.so 00:02:07.075 SYMLINK libspdk_ut.so 00:02:07.648 CC lib/dma/dma.o 00:02:07.648 CC lib/util/base64.o 00:02:07.648 CC lib/util/bit_array.o 00:02:07.648 CC lib/util/cpuset.o 00:02:07.648 CC lib/util/crc32.o 00:02:07.648 CC lib/util/crc16.o 00:02:07.648 CC lib/util/crc32c.o 00:02:07.648 CXX lib/trace_parser/trace.o 00:02:07.648 CC lib/util/crc32_ieee.o 00:02:07.648 CC lib/util/crc64.o 00:02:07.648 CC lib/util/dif.o 00:02:07.648 CC lib/util/fd.o 00:02:07.648 CC lib/ioat/ioat.o 00:02:07.648 CC lib/util/file.o 00:02:07.648 CC lib/util/hexlify.o 00:02:07.648 CC lib/util/iov.o 00:02:07.648 CC lib/util/math.o 00:02:07.648 CC lib/util/pipe.o 00:02:07.648 CC lib/util/strerror_tls.o 00:02:07.648 CC lib/util/string.o 00:02:07.648 CC lib/util/uuid.o 00:02:07.648 CC lib/util/xor.o 00:02:07.648 CC lib/util/fd_group.o 00:02:07.648 CC lib/util/zipf.o 00:02:07.648 CC lib/vfio_user/host/vfio_user.o 00:02:07.648 CC lib/vfio_user/host/vfio_user_pci.o 00:02:07.648 LIB libspdk_dma.a 00:02:07.909 SO libspdk_dma.so.4.0 00:02:07.909 LIB libspdk_ioat.a 00:02:07.909 SYMLINK libspdk_dma.so 00:02:07.909 SO libspdk_ioat.so.7.0 00:02:07.909 SYMLINK libspdk_ioat.so 00:02:07.909 LIB libspdk_vfio_user.a 00:02:07.909 SO libspdk_vfio_user.so.5.0 00:02:08.168 LIB libspdk_util.a 00:02:08.168 SYMLINK libspdk_vfio_user.so 00:02:08.168 SO libspdk_util.so.9.1 00:02:08.168 SYMLINK libspdk_util.so 00:02:08.429 LIB libspdk_trace_parser.a 00:02:08.429 SO libspdk_trace_parser.so.5.0 00:02:08.429 SYMLINK libspdk_trace_parser.so 00:02:08.691 CC lib/rdma_utils/rdma_utils.o 00:02:08.691 CC lib/env_dpdk/memory.o 00:02:08.691 CC lib/env_dpdk/env.o 00:02:08.691 CC lib/env_dpdk/pci.o 00:02:08.691 CC lib/env_dpdk/init.o 00:02:08.691 CC lib/env_dpdk/threads.o 00:02:08.691 CC lib/env_dpdk/pci_ioat.o 00:02:08.691 CC lib/env_dpdk/pci_virtio.o 00:02:08.691 CC lib/env_dpdk/pci_vmd.o 00:02:08.691 CC lib/env_dpdk/pci_idxd.o 00:02:08.691 CC lib/env_dpdk/pci_event.o 00:02:08.691 CC lib/rdma_provider/common.o 00:02:08.691 CC lib/vmd/vmd.o 00:02:08.691 CC lib/env_dpdk/sigbus_handler.o 00:02:08.691 CC lib/conf/conf.o 00:02:08.691 CC lib/env_dpdk/pci_dpdk.o 00:02:08.691 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:08.691 CC lib/json/json_parse.o 00:02:08.691 CC lib/vmd/led.o 00:02:08.691 CC lib/idxd/idxd.o 00:02:08.691 CC lib/json/json_util.o 00:02:08.691 CC lib/idxd/idxd_user.o 00:02:08.691 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:08.691 CC lib/json/json_write.o 00:02:08.691 CC lib/idxd/idxd_kernel.o 00:02:08.691 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:08.952 LIB libspdk_rdma_provider.a 00:02:08.952 LIB libspdk_conf.a 00:02:08.952 LIB libspdk_rdma_utils.a 00:02:08.952 SO libspdk_rdma_provider.so.6.0 00:02:08.952 SO libspdk_conf.so.6.0 00:02:08.952 SO libspdk_rdma_utils.so.1.0 00:02:08.952 LIB libspdk_json.a 00:02:08.952 SYMLINK libspdk_rdma_provider.so 00:02:08.952 SYMLINK libspdk_conf.so 00:02:08.952 SO libspdk_json.so.6.0 00:02:08.952 SYMLINK libspdk_rdma_utils.so 00:02:08.952 SYMLINK libspdk_json.so 00:02:09.212 LIB libspdk_idxd.a 00:02:09.212 SO libspdk_idxd.so.12.0 00:02:09.212 LIB libspdk_vmd.a 00:02:09.212 SO libspdk_vmd.so.6.0 00:02:09.212 SYMLINK libspdk_idxd.so 00:02:09.212 SYMLINK libspdk_vmd.so 00:02:09.473 CC lib/jsonrpc/jsonrpc_server.o 00:02:09.473 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:09.473 CC lib/jsonrpc/jsonrpc_client.o 00:02:09.473 CC lib/jsonrpc/jsonrpc_client_tcp.o 
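The LIB / SO / SYMLINK triplets in this stretch (for example libspdk_log.a, libspdk_log.so.7.0, libspdk_log.so) show each library being archived, linked as a versioned shared object, and then exposed under its unversioned name. A minimal, generic shell sketch of the shared-object pair of steps, not SPDK's actual make rules, with object names taken from the lib/log compile lines above:

  # Link the objects into a versioned shared object with a matching soname,
  # then create the conventional unversioned symlink used when linking against it.
  cc -shared -Wl,-soname,libspdk_log.so.7.0 -o libspdk_log.so.7.0 \
      log.o log_flags.o log_deprecated.o
  ln -sf libspdk_log.so.7.0 libspdk_log.so
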
00:02:09.735 LIB libspdk_jsonrpc.a 00:02:09.735 SO libspdk_jsonrpc.so.6.0 00:02:09.735 SYMLINK libspdk_jsonrpc.so 00:02:09.735 LIB libspdk_env_dpdk.a 00:02:09.995 SO libspdk_env_dpdk.so.14.1 00:02:09.995 SYMLINK libspdk_env_dpdk.so 00:02:10.256 CC lib/rpc/rpc.o 00:02:10.256 LIB libspdk_rpc.a 00:02:10.517 SO libspdk_rpc.so.6.0 00:02:10.517 SYMLINK libspdk_rpc.so 00:02:10.777 CC lib/trace/trace.o 00:02:10.777 CC lib/trace/trace_flags.o 00:02:10.777 CC lib/notify/notify.o 00:02:10.777 CC lib/trace/trace_rpc.o 00:02:10.777 CC lib/notify/notify_rpc.o 00:02:10.777 CC lib/keyring/keyring.o 00:02:10.777 CC lib/keyring/keyring_rpc.o 00:02:11.038 LIB libspdk_notify.a 00:02:11.038 LIB libspdk_keyring.a 00:02:11.038 SO libspdk_notify.so.6.0 00:02:11.038 LIB libspdk_trace.a 00:02:11.038 SO libspdk_keyring.so.1.0 00:02:11.038 SYMLINK libspdk_notify.so 00:02:11.038 SO libspdk_trace.so.10.0 00:02:11.299 SYMLINK libspdk_keyring.so 00:02:11.299 SYMLINK libspdk_trace.so 00:02:11.561 CC lib/thread/thread.o 00:02:11.561 CC lib/thread/iobuf.o 00:02:11.561 CC lib/sock/sock.o 00:02:11.561 CC lib/sock/sock_rpc.o 00:02:12.134 LIB libspdk_sock.a 00:02:12.134 SO libspdk_sock.so.10.0 00:02:12.134 SYMLINK libspdk_sock.so 00:02:12.395 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:12.395 CC lib/nvme/nvme_fabric.o 00:02:12.395 CC lib/nvme/nvme_ctrlr.o 00:02:12.395 CC lib/nvme/nvme_ns_cmd.o 00:02:12.395 CC lib/nvme/nvme_ns.o 00:02:12.395 CC lib/nvme/nvme_pcie_common.o 00:02:12.395 CC lib/nvme/nvme_qpair.o 00:02:12.395 CC lib/nvme/nvme_pcie.o 00:02:12.395 CC lib/nvme/nvme.o 00:02:12.395 CC lib/nvme/nvme_discovery.o 00:02:12.395 CC lib/nvme/nvme_quirks.o 00:02:12.395 CC lib/nvme/nvme_transport.o 00:02:12.395 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:12.395 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:12.395 CC lib/nvme/nvme_tcp.o 00:02:12.395 CC lib/nvme/nvme_opal.o 00:02:12.395 CC lib/nvme/nvme_io_msg.o 00:02:12.395 CC lib/nvme/nvme_poll_group.o 00:02:12.395 CC lib/nvme/nvme_zns.o 00:02:12.395 CC lib/nvme/nvme_stubs.o 00:02:12.395 CC lib/nvme/nvme_auth.o 00:02:12.395 CC lib/nvme/nvme_cuse.o 00:02:12.395 CC lib/nvme/nvme_vfio_user.o 00:02:12.395 CC lib/nvme/nvme_rdma.o 00:02:12.967 LIB libspdk_thread.a 00:02:12.967 SO libspdk_thread.so.10.1 00:02:12.967 SYMLINK libspdk_thread.so 00:02:13.228 CC lib/accel/accel.o 00:02:13.228 CC lib/accel/accel_rpc.o 00:02:13.228 CC lib/accel/accel_sw.o 00:02:13.228 CC lib/virtio/virtio.o 00:02:13.228 CC lib/virtio/virtio_vhost_user.o 00:02:13.228 CC lib/virtio/virtio_vfio_user.o 00:02:13.228 CC lib/virtio/virtio_pci.o 00:02:13.228 CC lib/blob/blobstore.o 00:02:13.228 CC lib/blob/request.o 00:02:13.228 CC lib/blob/zeroes.o 00:02:13.228 CC lib/blob/blob_bs_dev.o 00:02:13.228 CC lib/vfu_tgt/tgt_endpoint.o 00:02:13.489 CC lib/vfu_tgt/tgt_rpc.o 00:02:13.489 CC lib/init/subsystem.o 00:02:13.489 CC lib/init/json_config.o 00:02:13.489 CC lib/init/subsystem_rpc.o 00:02:13.489 CC lib/init/rpc.o 00:02:13.749 LIB libspdk_init.a 00:02:13.749 LIB libspdk_virtio.a 00:02:13.749 SO libspdk_init.so.5.0 00:02:13.750 LIB libspdk_vfu_tgt.a 00:02:13.750 SO libspdk_virtio.so.7.0 00:02:13.750 SO libspdk_vfu_tgt.so.3.0 00:02:13.750 SYMLINK libspdk_init.so 00:02:13.750 SYMLINK libspdk_virtio.so 00:02:13.750 SYMLINK libspdk_vfu_tgt.so 00:02:14.010 CC lib/event/app.o 00:02:14.010 CC lib/event/reactor.o 00:02:14.010 CC lib/event/log_rpc.o 00:02:14.010 CC lib/event/app_rpc.o 00:02:14.010 CC lib/event/scheduler_static.o 00:02:14.279 LIB libspdk_accel.a 00:02:14.279 SO libspdk_accel.so.15.1 00:02:14.279 LIB libspdk_nvme.a 00:02:14.279 
SYMLINK libspdk_accel.so 00:02:14.540 SO libspdk_nvme.so.13.1 00:02:14.541 LIB libspdk_event.a 00:02:14.541 SO libspdk_event.so.14.0 00:02:14.541 SYMLINK libspdk_nvme.so 00:02:14.541 SYMLINK libspdk_event.so 00:02:14.541 CC lib/bdev/bdev.o 00:02:14.541 CC lib/bdev/bdev_rpc.o 00:02:14.541 CC lib/bdev/scsi_nvme.o 00:02:14.541 CC lib/bdev/bdev_zone.o 00:02:14.541 CC lib/bdev/part.o 00:02:15.971 LIB libspdk_blob.a 00:02:15.971 SO libspdk_blob.so.11.0 00:02:15.971 SYMLINK libspdk_blob.so 00:02:16.232 CC lib/blobfs/blobfs.o 00:02:16.232 CC lib/blobfs/tree.o 00:02:16.493 CC lib/lvol/lvol.o 00:02:16.754 LIB libspdk_bdev.a 00:02:17.016 SO libspdk_bdev.so.15.1 00:02:17.016 SYMLINK libspdk_bdev.so 00:02:17.016 LIB libspdk_blobfs.a 00:02:17.016 SO libspdk_blobfs.so.10.0 00:02:17.277 LIB libspdk_lvol.a 00:02:17.277 SYMLINK libspdk_blobfs.so 00:02:17.277 SO libspdk_lvol.so.10.0 00:02:17.277 SYMLINK libspdk_lvol.so 00:02:17.277 CC lib/ftl/ftl_core.o 00:02:17.277 CC lib/ftl/ftl_init.o 00:02:17.277 CC lib/ftl/ftl_layout.o 00:02:17.277 CC lib/ftl/ftl_debug.o 00:02:17.277 CC lib/ftl/ftl_io.o 00:02:17.277 CC lib/ftl/ftl_sb.o 00:02:17.277 CC lib/ftl/ftl_l2p_flat.o 00:02:17.277 CC lib/ftl/ftl_l2p.o 00:02:17.277 CC lib/ftl/ftl_nv_cache.o 00:02:17.277 CC lib/ftl/ftl_band.o 00:02:17.277 CC lib/nvmf/ctrlr.o 00:02:17.277 CC lib/ftl/ftl_band_ops.o 00:02:17.277 CC lib/ftl/ftl_writer.o 00:02:17.277 CC lib/nvmf/ctrlr_discovery.o 00:02:17.277 CC lib/ftl/ftl_rq.o 00:02:17.277 CC lib/nvmf/ctrlr_bdev.o 00:02:17.277 CC lib/ublk/ublk.o 00:02:17.277 CC lib/ftl/ftl_reloc.o 00:02:17.277 CC lib/ftl/ftl_l2p_cache.o 00:02:17.277 CC lib/nvmf/subsystem.o 00:02:17.277 CC lib/ublk/ublk_rpc.o 00:02:17.277 CC lib/ftl/ftl_p2l.o 00:02:17.277 CC lib/nvmf/nvmf.o 00:02:17.277 CC lib/ftl/mngt/ftl_mngt.o 00:02:17.277 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:17.277 CC lib/nvmf/nvmf_rpc.o 00:02:17.277 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:17.277 CC lib/scsi/dev.o 00:02:17.277 CC lib/nvmf/transport.o 00:02:17.277 CC lib/nvmf/stubs.o 00:02:17.277 CC lib/nvmf/tcp.o 00:02:17.277 CC lib/scsi/lun.o 00:02:17.277 CC lib/nbd/nbd.o 00:02:17.277 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:17.277 CC lib/nvmf/mdns_server.o 00:02:17.277 CC lib/nbd/nbd_rpc.o 00:02:17.277 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:17.277 CC lib/nvmf/vfio_user.o 00:02:17.277 CC lib/scsi/scsi.o 00:02:17.277 CC lib/scsi/port.o 00:02:17.277 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:17.277 CC lib/scsi/scsi_bdev.o 00:02:17.277 CC lib/scsi/scsi_pr.o 00:02:17.277 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:17.277 CC lib/nvmf/rdma.o 00:02:17.277 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:17.277 CC lib/scsi/scsi_rpc.o 00:02:17.277 CC lib/nvmf/auth.o 00:02:17.536 CC lib/scsi/task.o 00:02:17.536 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:17.536 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:17.536 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:17.536 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:17.536 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:17.536 CC lib/ftl/utils/ftl_conf.o 00:02:17.536 CC lib/ftl/utils/ftl_md.o 00:02:17.536 CC lib/ftl/utils/ftl_mempool.o 00:02:17.536 CC lib/ftl/utils/ftl_bitmap.o 00:02:17.536 CC lib/ftl/utils/ftl_property.o 00:02:17.536 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:17.536 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:17.536 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:17.536 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:17.536 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:17.536 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:17.536 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:17.536 CC 
lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:17.536 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:17.536 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:17.536 CC lib/ftl/base/ftl_base_dev.o 00:02:17.536 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:17.536 CC lib/ftl/ftl_trace.o 00:02:17.536 CC lib/ftl/base/ftl_base_bdev.o 00:02:18.122 LIB libspdk_nbd.a 00:02:18.122 LIB libspdk_scsi.a 00:02:18.122 SO libspdk_nbd.so.7.0 00:02:18.122 SO libspdk_scsi.so.9.0 00:02:18.122 SYMLINK libspdk_nbd.so 00:02:18.122 SYMLINK libspdk_scsi.so 00:02:18.122 LIB libspdk_ublk.a 00:02:18.122 SO libspdk_ublk.so.3.0 00:02:18.122 SYMLINK libspdk_ublk.so 00:02:18.383 LIB libspdk_ftl.a 00:02:18.383 CC lib/iscsi/conn.o 00:02:18.383 CC lib/iscsi/init_grp.o 00:02:18.383 CC lib/iscsi/param.o 00:02:18.383 CC lib/iscsi/iscsi.o 00:02:18.383 CC lib/iscsi/md5.o 00:02:18.383 CC lib/vhost/vhost.o 00:02:18.383 CC lib/iscsi/portal_grp.o 00:02:18.383 CC lib/vhost/vhost_rpc.o 00:02:18.383 CC lib/iscsi/tgt_node.o 00:02:18.383 CC lib/vhost/vhost_scsi.o 00:02:18.383 CC lib/iscsi/iscsi_subsystem.o 00:02:18.383 CC lib/iscsi/iscsi_rpc.o 00:02:18.383 CC lib/vhost/vhost_blk.o 00:02:18.383 CC lib/iscsi/task.o 00:02:18.383 CC lib/vhost/rte_vhost_user.o 00:02:18.644 SO libspdk_ftl.so.9.0 00:02:18.905 SYMLINK libspdk_ftl.so 00:02:19.166 LIB libspdk_nvmf.a 00:02:19.427 SO libspdk_nvmf.so.19.0 00:02:19.427 LIB libspdk_vhost.a 00:02:19.427 SO libspdk_vhost.so.8.0 00:02:19.427 SYMLINK libspdk_nvmf.so 00:02:19.427 SYMLINK libspdk_vhost.so 00:02:19.687 LIB libspdk_iscsi.a 00:02:19.687 SO libspdk_iscsi.so.8.0 00:02:19.948 SYMLINK libspdk_iscsi.so 00:02:20.519 CC module/env_dpdk/env_dpdk_rpc.o 00:02:20.519 CC module/vfu_device/vfu_virtio.o 00:02:20.519 CC module/vfu_device/vfu_virtio_blk.o 00:02:20.519 CC module/vfu_device/vfu_virtio_scsi.o 00:02:20.519 CC module/vfu_device/vfu_virtio_rpc.o 00:02:20.519 CC module/accel/dsa/accel_dsa.o 00:02:20.519 CC module/accel/dsa/accel_dsa_rpc.o 00:02:20.519 CC module/accel/error/accel_error.o 00:02:20.519 CC module/accel/error/accel_error_rpc.o 00:02:20.519 CC module/scheduler/gscheduler/gscheduler.o 00:02:20.519 CC module/accel/ioat/accel_ioat.o 00:02:20.519 CC module/accel/iaa/accel_iaa.o 00:02:20.519 CC module/accel/ioat/accel_ioat_rpc.o 00:02:20.519 CC module/accel/iaa/accel_iaa_rpc.o 00:02:20.519 LIB libspdk_env_dpdk_rpc.a 00:02:20.519 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:20.519 CC module/blob/bdev/blob_bdev.o 00:02:20.519 CC module/sock/posix/posix.o 00:02:20.519 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:20.519 CC module/keyring/file/keyring_rpc.o 00:02:20.519 CC module/keyring/file/keyring.o 00:02:20.519 CC module/keyring/linux/keyring.o 00:02:20.519 CC module/keyring/linux/keyring_rpc.o 00:02:20.519 SO libspdk_env_dpdk_rpc.so.6.0 00:02:20.519 SYMLINK libspdk_env_dpdk_rpc.so 00:02:20.779 LIB libspdk_scheduler_gscheduler.a 00:02:20.779 LIB libspdk_keyring_linux.a 00:02:20.779 LIB libspdk_keyring_file.a 00:02:20.779 LIB libspdk_accel_error.a 00:02:20.779 LIB libspdk_scheduler_dpdk_governor.a 00:02:20.779 SO libspdk_keyring_linux.so.1.0 00:02:20.779 SO libspdk_scheduler_gscheduler.so.4.0 00:02:20.779 SO libspdk_accel_error.so.2.0 00:02:20.779 LIB libspdk_accel_ioat.a 00:02:20.779 SO libspdk_keyring_file.so.1.0 00:02:20.779 LIB libspdk_scheduler_dynamic.a 00:02:20.779 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:20.779 LIB libspdk_accel_iaa.a 00:02:20.779 LIB libspdk_accel_dsa.a 00:02:20.779 SO libspdk_scheduler_dynamic.so.4.0 00:02:20.779 SO libspdk_accel_ioat.so.6.0 00:02:20.779 SO 
libspdk_accel_iaa.so.3.0 00:02:20.779 SYMLINK libspdk_keyring_linux.so 00:02:20.779 SYMLINK libspdk_scheduler_gscheduler.so 00:02:20.779 SO libspdk_accel_dsa.so.5.0 00:02:20.779 SYMLINK libspdk_accel_error.so 00:02:20.779 LIB libspdk_blob_bdev.a 00:02:20.779 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:20.779 SYMLINK libspdk_keyring_file.so 00:02:20.779 SYMLINK libspdk_scheduler_dynamic.so 00:02:20.779 SO libspdk_blob_bdev.so.11.0 00:02:20.779 SYMLINK libspdk_accel_iaa.so 00:02:20.779 SYMLINK libspdk_accel_ioat.so 00:02:20.779 SYMLINK libspdk_accel_dsa.so 00:02:21.040 SYMLINK libspdk_blob_bdev.so 00:02:21.040 LIB libspdk_vfu_device.a 00:02:21.040 SO libspdk_vfu_device.so.3.0 00:02:21.040 SYMLINK libspdk_vfu_device.so 00:02:21.301 LIB libspdk_sock_posix.a 00:02:21.301 SO libspdk_sock_posix.so.6.0 00:02:21.301 SYMLINK libspdk_sock_posix.so 00:02:21.560 CC module/bdev/delay/vbdev_delay.o 00:02:21.560 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:21.560 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:21.560 CC module/blobfs/bdev/blobfs_bdev.o 00:02:21.560 CC module/bdev/aio/bdev_aio.o 00:02:21.560 CC module/bdev/aio/bdev_aio_rpc.o 00:02:21.560 CC module/bdev/gpt/gpt.o 00:02:21.560 CC module/bdev/lvol/vbdev_lvol.o 00:02:21.560 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:21.560 CC module/bdev/error/vbdev_error.o 00:02:21.560 CC module/bdev/error/vbdev_error_rpc.o 00:02:21.560 CC module/bdev/malloc/bdev_malloc.o 00:02:21.560 CC module/bdev/gpt/vbdev_gpt.o 00:02:21.560 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:21.560 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:21.560 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:21.560 CC module/bdev/split/vbdev_split.o 00:02:21.560 CC module/bdev/ftl/bdev_ftl.o 00:02:21.560 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:21.560 CC module/bdev/split/vbdev_split_rpc.o 00:02:21.560 CC module/bdev/raid/bdev_raid.o 00:02:21.560 CC module/bdev/raid/bdev_raid_rpc.o 00:02:21.560 CC module/bdev/passthru/vbdev_passthru.o 00:02:21.560 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:21.560 CC module/bdev/null/bdev_null.o 00:02:21.560 CC module/bdev/raid/bdev_raid_sb.o 00:02:21.560 CC module/bdev/null/bdev_null_rpc.o 00:02:21.560 CC module/bdev/nvme/bdev_nvme.o 00:02:21.560 CC module/bdev/raid/raid0.o 00:02:21.560 CC module/bdev/raid/raid1.o 00:02:21.560 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:21.560 CC module/bdev/raid/concat.o 00:02:21.560 CC module/bdev/nvme/nvme_rpc.o 00:02:21.560 CC module/bdev/nvme/bdev_mdns_client.o 00:02:21.560 CC module/bdev/nvme/vbdev_opal.o 00:02:21.560 CC module/bdev/iscsi/bdev_iscsi.o 00:02:21.560 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:21.560 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:21.560 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:21.560 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:21.560 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:21.560 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:21.820 LIB libspdk_blobfs_bdev.a 00:02:21.820 SO libspdk_blobfs_bdev.so.6.0 00:02:21.820 LIB libspdk_bdev_split.a 00:02:21.820 LIB libspdk_bdev_error.a 00:02:21.820 LIB libspdk_bdev_gpt.a 00:02:21.820 SYMLINK libspdk_blobfs_bdev.so 00:02:21.820 LIB libspdk_bdev_null.a 00:02:21.820 LIB libspdk_bdev_passthru.a 00:02:21.820 SO libspdk_bdev_gpt.so.6.0 00:02:21.820 SO libspdk_bdev_split.so.6.0 00:02:21.820 LIB libspdk_bdev_aio.a 00:02:21.820 SO libspdk_bdev_error.so.6.0 00:02:21.820 LIB libspdk_bdev_ftl.a 00:02:21.820 LIB libspdk_bdev_zone_block.a 00:02:21.820 SO libspdk_bdev_passthru.so.6.0 00:02:21.820 LIB libspdk_bdev_delay.a 
00:02:21.820 SO libspdk_bdev_null.so.6.0 00:02:21.820 LIB libspdk_bdev_malloc.a 00:02:21.820 SO libspdk_bdev_aio.so.6.0 00:02:21.820 SO libspdk_bdev_ftl.so.6.0 00:02:21.820 SO libspdk_bdev_zone_block.so.6.0 00:02:21.820 LIB libspdk_bdev_iscsi.a 00:02:21.820 SYMLINK libspdk_bdev_gpt.so 00:02:21.820 SYMLINK libspdk_bdev_split.so 00:02:21.820 SO libspdk_bdev_malloc.so.6.0 00:02:21.820 SYMLINK libspdk_bdev_error.so 00:02:21.820 SO libspdk_bdev_delay.so.6.0 00:02:21.820 SYMLINK libspdk_bdev_passthru.so 00:02:22.079 SYMLINK libspdk_bdev_aio.so 00:02:22.079 SYMLINK libspdk_bdev_null.so 00:02:22.079 SO libspdk_bdev_iscsi.so.6.0 00:02:22.079 SYMLINK libspdk_bdev_zone_block.so 00:02:22.079 SYMLINK libspdk_bdev_ftl.so 00:02:22.079 LIB libspdk_bdev_lvol.a 00:02:22.079 SYMLINK libspdk_bdev_malloc.so 00:02:22.079 SYMLINK libspdk_bdev_delay.so 00:02:22.079 SO libspdk_bdev_lvol.so.6.0 00:02:22.079 LIB libspdk_bdev_virtio.a 00:02:22.079 SYMLINK libspdk_bdev_iscsi.so 00:02:22.079 SO libspdk_bdev_virtio.so.6.0 00:02:22.079 SYMLINK libspdk_bdev_lvol.so 00:02:22.079 SYMLINK libspdk_bdev_virtio.so 00:02:22.340 LIB libspdk_bdev_raid.a 00:02:22.340 SO libspdk_bdev_raid.so.6.0 00:02:22.598 SYMLINK libspdk_bdev_raid.so 00:02:23.539 LIB libspdk_bdev_nvme.a 00:02:23.539 SO libspdk_bdev_nvme.so.7.0 00:02:23.539 SYMLINK libspdk_bdev_nvme.so 00:02:24.112 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:24.112 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:24.112 CC module/event/subsystems/iobuf/iobuf.o 00:02:24.112 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:24.112 CC module/event/subsystems/scheduler/scheduler.o 00:02:24.112 CC module/event/subsystems/vmd/vmd.o 00:02:24.112 CC module/event/subsystems/sock/sock.o 00:02:24.112 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:24.112 CC module/event/subsystems/keyring/keyring.o 00:02:24.373 LIB libspdk_event_vhost_blk.a 00:02:24.373 LIB libspdk_event_vfu_tgt.a 00:02:24.373 LIB libspdk_event_keyring.a 00:02:24.373 LIB libspdk_event_vmd.a 00:02:24.373 LIB libspdk_event_scheduler.a 00:02:24.373 LIB libspdk_event_sock.a 00:02:24.373 LIB libspdk_event_iobuf.a 00:02:24.373 SO libspdk_event_vmd.so.6.0 00:02:24.373 SO libspdk_event_vhost_blk.so.3.0 00:02:24.373 SO libspdk_event_keyring.so.1.0 00:02:24.373 SO libspdk_event_vfu_tgt.so.3.0 00:02:24.373 SO libspdk_event_scheduler.so.4.0 00:02:24.373 SO libspdk_event_sock.so.5.0 00:02:24.373 SO libspdk_event_iobuf.so.3.0 00:02:24.644 SYMLINK libspdk_event_vhost_blk.so 00:02:24.644 SYMLINK libspdk_event_vmd.so 00:02:24.644 SYMLINK libspdk_event_vfu_tgt.so 00:02:24.644 SYMLINK libspdk_event_keyring.so 00:02:24.644 SYMLINK libspdk_event_sock.so 00:02:24.644 SYMLINK libspdk_event_scheduler.so 00:02:24.644 SYMLINK libspdk_event_iobuf.so 00:02:24.904 CC module/event/subsystems/accel/accel.o 00:02:24.904 LIB libspdk_event_accel.a 00:02:25.164 SO libspdk_event_accel.so.6.0 00:02:25.164 SYMLINK libspdk_event_accel.so 00:02:25.424 CC module/event/subsystems/bdev/bdev.o 00:02:25.685 LIB libspdk_event_bdev.a 00:02:25.685 SO libspdk_event_bdev.so.6.0 00:02:25.685 SYMLINK libspdk_event_bdev.so 00:02:26.256 CC module/event/subsystems/ublk/ublk.o 00:02:26.256 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:26.256 CC module/event/subsystems/scsi/scsi.o 00:02:26.256 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:26.256 CC module/event/subsystems/nbd/nbd.o 00:02:26.256 LIB libspdk_event_ublk.a 00:02:26.256 LIB libspdk_event_nbd.a 00:02:26.256 LIB libspdk_event_scsi.a 00:02:26.256 SO libspdk_event_ublk.so.3.0 00:02:26.256 SO 
libspdk_event_nbd.so.6.0 00:02:26.256 SO libspdk_event_scsi.so.6.0 00:02:26.256 LIB libspdk_event_nvmf.a 00:02:26.256 SYMLINK libspdk_event_ublk.so 00:02:26.256 SO libspdk_event_nvmf.so.6.0 00:02:26.256 SYMLINK libspdk_event_nbd.so 00:02:26.256 SYMLINK libspdk_event_scsi.so 00:02:26.517 SYMLINK libspdk_event_nvmf.so 00:02:26.778 CC module/event/subsystems/iscsi/iscsi.o 00:02:26.778 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:26.778 LIB libspdk_event_iscsi.a 00:02:26.778 LIB libspdk_event_vhost_scsi.a 00:02:27.038 SO libspdk_event_iscsi.so.6.0 00:02:27.038 SO libspdk_event_vhost_scsi.so.3.0 00:02:27.038 SYMLINK libspdk_event_iscsi.so 00:02:27.038 SYMLINK libspdk_event_vhost_scsi.so 00:02:27.038 SO libspdk.so.6.0 00:02:27.298 SYMLINK libspdk.so 00:02:27.560 CXX app/trace/trace.o 00:02:27.560 CC app/trace_record/trace_record.o 00:02:27.560 TEST_HEADER include/spdk/accel_module.h 00:02:27.560 CC app/spdk_top/spdk_top.o 00:02:27.560 TEST_HEADER include/spdk/accel.h 00:02:27.560 TEST_HEADER include/spdk/barrier.h 00:02:27.560 TEST_HEADER include/spdk/assert.h 00:02:27.560 TEST_HEADER include/spdk/base64.h 00:02:27.561 TEST_HEADER include/spdk/bdev.h 00:02:27.561 CC app/spdk_nvme_discover/discovery_aer.o 00:02:27.561 TEST_HEADER include/spdk/bdev_module.h 00:02:27.561 CC test/rpc_client/rpc_client_test.o 00:02:27.561 TEST_HEADER include/spdk/bdev_zone.h 00:02:27.561 CC app/spdk_nvme_identify/identify.o 00:02:27.561 CC app/spdk_lspci/spdk_lspci.o 00:02:27.561 TEST_HEADER include/spdk/bit_array.h 00:02:27.561 CC app/spdk_nvme_perf/perf.o 00:02:27.561 TEST_HEADER include/spdk/bit_pool.h 00:02:27.561 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:27.561 TEST_HEADER include/spdk/blob_bdev.h 00:02:27.561 TEST_HEADER include/spdk/blobfs.h 00:02:27.561 TEST_HEADER include/spdk/blob.h 00:02:27.561 TEST_HEADER include/spdk/conf.h 00:02:27.561 TEST_HEADER include/spdk/cpuset.h 00:02:27.561 TEST_HEADER include/spdk/config.h 00:02:27.561 TEST_HEADER include/spdk/crc16.h 00:02:27.561 TEST_HEADER include/spdk/crc32.h 00:02:27.561 TEST_HEADER include/spdk/dif.h 00:02:27.561 TEST_HEADER include/spdk/crc64.h 00:02:27.561 TEST_HEADER include/spdk/dma.h 00:02:27.561 TEST_HEADER include/spdk/endian.h 00:02:27.561 TEST_HEADER include/spdk/env_dpdk.h 00:02:27.561 TEST_HEADER include/spdk/env.h 00:02:27.561 TEST_HEADER include/spdk/event.h 00:02:27.561 TEST_HEADER include/spdk/fd_group.h 00:02:27.561 TEST_HEADER include/spdk/fd.h 00:02:27.561 CC app/spdk_dd/spdk_dd.o 00:02:27.561 TEST_HEADER include/spdk/file.h 00:02:27.561 TEST_HEADER include/spdk/gpt_spec.h 00:02:27.561 TEST_HEADER include/spdk/ftl.h 00:02:27.561 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:27.561 TEST_HEADER include/spdk/hexlify.h 00:02:27.561 TEST_HEADER include/spdk/histogram_data.h 00:02:27.561 TEST_HEADER include/spdk/init.h 00:02:27.561 TEST_HEADER include/spdk/idxd_spec.h 00:02:27.561 TEST_HEADER include/spdk/idxd.h 00:02:27.561 TEST_HEADER include/spdk/ioat_spec.h 00:02:27.561 CC app/nvmf_tgt/nvmf_main.o 00:02:27.561 TEST_HEADER include/spdk/ioat.h 00:02:27.561 TEST_HEADER include/spdk/iscsi_spec.h 00:02:27.561 TEST_HEADER include/spdk/jsonrpc.h 00:02:27.561 TEST_HEADER include/spdk/json.h 00:02:27.561 TEST_HEADER include/spdk/keyring.h 00:02:27.561 TEST_HEADER include/spdk/likely.h 00:02:27.561 TEST_HEADER include/spdk/keyring_module.h 00:02:27.561 CC app/iscsi_tgt/iscsi_tgt.o 00:02:27.561 TEST_HEADER include/spdk/log.h 00:02:27.561 TEST_HEADER include/spdk/lvol.h 00:02:27.561 TEST_HEADER include/spdk/memory.h 00:02:27.561 
TEST_HEADER include/spdk/mmio.h 00:02:27.561 TEST_HEADER include/spdk/nbd.h 00:02:27.561 TEST_HEADER include/spdk/notify.h 00:02:27.561 TEST_HEADER include/spdk/nvme.h 00:02:27.561 TEST_HEADER include/spdk/nvme_intel.h 00:02:27.561 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:27.561 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:27.561 TEST_HEADER include/spdk/nvme_spec.h 00:02:27.561 TEST_HEADER include/spdk/nvme_zns.h 00:02:27.561 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:27.561 TEST_HEADER include/spdk/nvmf.h 00:02:27.561 CC app/spdk_tgt/spdk_tgt.o 00:02:27.561 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:27.561 TEST_HEADER include/spdk/nvmf_spec.h 00:02:27.561 TEST_HEADER include/spdk/nvmf_transport.h 00:02:27.561 TEST_HEADER include/spdk/opal.h 00:02:27.561 TEST_HEADER include/spdk/opal_spec.h 00:02:27.561 TEST_HEADER include/spdk/pci_ids.h 00:02:27.561 TEST_HEADER include/spdk/pipe.h 00:02:27.561 TEST_HEADER include/spdk/queue.h 00:02:27.561 TEST_HEADER include/spdk/rpc.h 00:02:27.561 TEST_HEADER include/spdk/reduce.h 00:02:27.561 TEST_HEADER include/spdk/scheduler.h 00:02:27.561 TEST_HEADER include/spdk/scsi.h 00:02:27.561 TEST_HEADER include/spdk/scsi_spec.h 00:02:27.561 TEST_HEADER include/spdk/sock.h 00:02:27.561 TEST_HEADER include/spdk/stdinc.h 00:02:27.561 TEST_HEADER include/spdk/thread.h 00:02:27.561 TEST_HEADER include/spdk/string.h 00:02:27.561 TEST_HEADER include/spdk/trace_parser.h 00:02:27.561 TEST_HEADER include/spdk/trace.h 00:02:27.561 TEST_HEADER include/spdk/util.h 00:02:27.561 TEST_HEADER include/spdk/tree.h 00:02:27.561 TEST_HEADER include/spdk/ublk.h 00:02:27.822 TEST_HEADER include/spdk/uuid.h 00:02:27.822 TEST_HEADER include/spdk/version.h 00:02:27.822 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:27.822 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:27.822 TEST_HEADER include/spdk/vhost.h 00:02:27.822 TEST_HEADER include/spdk/vmd.h 00:02:27.822 TEST_HEADER include/spdk/xor.h 00:02:27.822 TEST_HEADER include/spdk/zipf.h 00:02:27.822 CXX test/cpp_headers/accel_module.o 00:02:27.822 CXX test/cpp_headers/accel.o 00:02:27.822 CXX test/cpp_headers/assert.o 00:02:27.822 CXX test/cpp_headers/barrier.o 00:02:27.822 CXX test/cpp_headers/base64.o 00:02:27.822 CXX test/cpp_headers/bdev_module.o 00:02:27.822 CXX test/cpp_headers/bdev.o 00:02:27.822 CXX test/cpp_headers/bdev_zone.o 00:02:27.822 CXX test/cpp_headers/bit_array.o 00:02:27.822 CXX test/cpp_headers/bit_pool.o 00:02:27.822 CXX test/cpp_headers/blob_bdev.o 00:02:27.822 CXX test/cpp_headers/blobfs.o 00:02:27.822 CXX test/cpp_headers/blobfs_bdev.o 00:02:27.822 CXX test/cpp_headers/blob.o 00:02:27.822 CXX test/cpp_headers/config.o 00:02:27.822 CXX test/cpp_headers/cpuset.o 00:02:27.822 CXX test/cpp_headers/conf.o 00:02:27.822 CXX test/cpp_headers/crc32.o 00:02:27.822 CXX test/cpp_headers/crc64.o 00:02:27.822 CXX test/cpp_headers/crc16.o 00:02:27.822 CXX test/cpp_headers/dif.o 00:02:27.822 CXX test/cpp_headers/dma.o 00:02:27.822 CXX test/cpp_headers/endian.o 00:02:27.822 CXX test/cpp_headers/env.o 00:02:27.822 CXX test/cpp_headers/file.o 00:02:27.822 CXX test/cpp_headers/env_dpdk.o 00:02:27.822 CXX test/cpp_headers/ftl.o 00:02:27.822 CXX test/cpp_headers/event.o 00:02:27.822 CXX test/cpp_headers/fd_group.o 00:02:27.822 CXX test/cpp_headers/fd.o 00:02:27.822 CXX test/cpp_headers/histogram_data.o 00:02:27.822 CXX test/cpp_headers/hexlify.o 00:02:27.822 CXX test/cpp_headers/gpt_spec.o 00:02:27.822 CXX test/cpp_headers/idxd_spec.o 00:02:27.822 CXX test/cpp_headers/idxd.o 00:02:27.822 CXX test/cpp_headers/ioat.o 
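The TEST_HEADER include/spdk/*.h and CXX test/cpp_headers/*.o entries come from SPDK's public-header check, which compiles a small C++ translation unit per installed header to confirm each header is self-contained and usable from C++. A rough shell sketch of what that amounts to; the loop, file names, and compiler flags here are illustrative rather than the project's actual test harness:

  # For every public header, emit a one-line C++ file that includes it and compile it;
  # a failure means the header lacks needed includes or C++ (extern "C") guards.
  for hdr in include/spdk/*.h; do
      name=$(basename "$hdr" .h)
      printf '#include "spdk/%s.h"\n' "$name" > "test/cpp_headers/$name.cpp"
      g++ -Iinclude -c "test/cpp_headers/$name.cpp" -o "test/cpp_headers/$name.o"
  done
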
00:02:27.822 CXX test/cpp_headers/init.o 00:02:27.822 CXX test/cpp_headers/ioat_spec.o 00:02:27.822 CXX test/cpp_headers/json.o 00:02:27.822 CXX test/cpp_headers/jsonrpc.o 00:02:27.822 CXX test/cpp_headers/iscsi_spec.o 00:02:27.822 CXX test/cpp_headers/keyring.o 00:02:27.822 CXX test/cpp_headers/keyring_module.o 00:02:27.822 CXX test/cpp_headers/log.o 00:02:27.822 CXX test/cpp_headers/lvol.o 00:02:27.822 CXX test/cpp_headers/likely.o 00:02:27.822 CXX test/cpp_headers/memory.o 00:02:27.822 CXX test/cpp_headers/mmio.o 00:02:27.822 CXX test/cpp_headers/nvme.o 00:02:27.822 CXX test/cpp_headers/nbd.o 00:02:27.822 CXX test/cpp_headers/nvme_intel.o 00:02:27.822 CXX test/cpp_headers/notify.o 00:02:27.822 CXX test/cpp_headers/nvme_spec.o 00:02:27.822 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:27.822 CXX test/cpp_headers/nvme_ocssd.o 00:02:27.822 CXX test/cpp_headers/nvme_zns.o 00:02:27.822 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:27.822 CXX test/cpp_headers/nvmf.o 00:02:27.822 CXX test/cpp_headers/nvmf_cmd.o 00:02:27.822 CXX test/cpp_headers/nvmf_spec.o 00:02:27.822 LINK spdk_lspci 00:02:27.822 CXX test/cpp_headers/opal_spec.o 00:02:27.822 CXX test/cpp_headers/nvmf_transport.o 00:02:27.822 CXX test/cpp_headers/opal.o 00:02:27.822 CXX test/cpp_headers/queue.o 00:02:27.822 CXX test/cpp_headers/pci_ids.o 00:02:27.822 CXX test/cpp_headers/pipe.o 00:02:27.822 CXX test/cpp_headers/reduce.o 00:02:27.822 CXX test/cpp_headers/rpc.o 00:02:27.822 CXX test/cpp_headers/scheduler.o 00:02:27.822 CXX test/cpp_headers/scsi.o 00:02:27.822 CXX test/cpp_headers/scsi_spec.o 00:02:27.822 CXX test/cpp_headers/sock.o 00:02:27.822 CC examples/ioat/perf/perf.o 00:02:27.822 CXX test/cpp_headers/stdinc.o 00:02:27.822 CXX test/cpp_headers/trace.o 00:02:27.822 CXX test/cpp_headers/string.o 00:02:27.822 CXX test/cpp_headers/thread.o 00:02:27.822 CXX test/cpp_headers/ublk.o 00:02:27.822 CXX test/cpp_headers/trace_parser.o 00:02:27.822 CXX test/cpp_headers/util.o 00:02:27.822 CXX test/cpp_headers/tree.o 00:02:27.822 CC test/app/jsoncat/jsoncat.o 00:02:27.822 CXX test/cpp_headers/uuid.o 00:02:27.822 CXX test/cpp_headers/version.o 00:02:27.822 CC test/env/vtophys/vtophys.o 00:02:27.822 CXX test/cpp_headers/vfio_user_pci.o 00:02:27.822 CXX test/cpp_headers/vfio_user_spec.o 00:02:27.822 CXX test/cpp_headers/vmd.o 00:02:27.822 CXX test/cpp_headers/vhost.o 00:02:27.822 CXX test/cpp_headers/xor.o 00:02:27.822 CXX test/cpp_headers/zipf.o 00:02:27.822 CC examples/ioat/verify/verify.o 00:02:27.822 CC test/app/histogram_perf/histogram_perf.o 00:02:27.822 CC examples/util/zipf/zipf.o 00:02:27.822 CC app/fio/nvme/fio_plugin.o 00:02:27.822 CC test/env/memory/memory_ut.o 00:02:27.822 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:27.822 CC test/thread/poller_perf/poller_perf.o 00:02:27.822 CC test/app/stub/stub.o 00:02:27.822 CC test/env/pci/pci_ut.o 00:02:27.822 CC test/dma/test_dma/test_dma.o 00:02:28.090 LINK spdk_nvme_discover 00:02:28.090 CC test/app/bdev_svc/bdev_svc.o 00:02:28.090 CC app/fio/bdev/fio_plugin.o 00:02:28.090 LINK rpc_client_test 00:02:28.090 LINK nvmf_tgt 00:02:28.090 LINK spdk_trace_record 00:02:28.090 LINK spdk_tgt 00:02:28.090 LINK interrupt_tgt 00:02:28.090 LINK iscsi_tgt 00:02:28.351 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:28.351 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:28.351 LINK spdk_trace 00:02:28.351 CC test/env/mem_callbacks/mem_callbacks.o 00:02:28.351 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:28.351 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:28.351 LINK spdk_dd 00:02:28.351 
LINK stub 00:02:28.351 LINK jsoncat 00:02:28.611 LINK histogram_perf 00:02:28.611 LINK vtophys 00:02:28.611 LINK zipf 00:02:28.611 LINK env_dpdk_post_init 00:02:28.611 LINK verify 00:02:28.611 LINK poller_perf 00:02:28.611 LINK bdev_svc 00:02:28.611 LINK ioat_perf 00:02:28.611 LINK test_dma 00:02:28.870 LINK spdk_top 00:02:28.870 CC app/vhost/vhost.o 00:02:28.870 LINK pci_ut 00:02:28.870 LINK spdk_bdev 00:02:28.870 LINK nvme_fuzz 00:02:28.870 LINK spdk_nvme 00:02:28.870 LINK spdk_nvme_perf 00:02:28.870 LINK vhost_fuzz 00:02:28.870 CC examples/idxd/perf/perf.o 00:02:28.870 CC examples/sock/hello_world/hello_sock.o 00:02:28.870 LINK spdk_nvme_identify 00:02:28.870 CC examples/vmd/lsvmd/lsvmd.o 00:02:28.870 CC examples/vmd/led/led.o 00:02:28.870 LINK vhost 00:02:29.129 LINK mem_callbacks 00:02:29.129 CC test/event/reactor_perf/reactor_perf.o 00:02:29.129 CC examples/thread/thread/thread_ex.o 00:02:29.129 CC test/event/reactor/reactor.o 00:02:29.129 CC test/event/event_perf/event_perf.o 00:02:29.129 CC test/event/scheduler/scheduler.o 00:02:29.130 CC test/event/app_repeat/app_repeat.o 00:02:29.130 LINK lsvmd 00:02:29.130 LINK led 00:02:29.130 CC test/nvme/sgl/sgl.o 00:02:29.130 CC test/nvme/simple_copy/simple_copy.o 00:02:29.130 CC test/nvme/fused_ordering/fused_ordering.o 00:02:29.130 LINK reactor_perf 00:02:29.130 CC test/nvme/aer/aer.o 00:02:29.130 LINK reactor 00:02:29.130 CC test/nvme/reserve/reserve.o 00:02:29.130 CC test/nvme/cuse/cuse.o 00:02:29.130 CC test/nvme/connect_stress/connect_stress.o 00:02:29.130 CC test/nvme/fdp/fdp.o 00:02:29.130 CC test/nvme/startup/startup.o 00:02:29.130 CC test/nvme/e2edp/nvme_dp.o 00:02:29.130 CC test/nvme/err_injection/err_injection.o 00:02:29.130 CC test/nvme/reset/reset.o 00:02:29.130 CC test/nvme/compliance/nvme_compliance.o 00:02:29.130 CC test/nvme/boot_partition/boot_partition.o 00:02:29.130 CC test/accel/dif/dif.o 00:02:29.130 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:29.130 CC test/nvme/overhead/overhead.o 00:02:29.130 LINK event_perf 00:02:29.130 CC test/blobfs/mkfs/mkfs.o 00:02:29.130 LINK hello_sock 00:02:29.130 LINK app_repeat 00:02:29.390 LINK idxd_perf 00:02:29.390 LINK scheduler 00:02:29.390 CC test/lvol/esnap/esnap.o 00:02:29.390 LINK thread 00:02:29.390 LINK memory_ut 00:02:29.390 LINK connect_stress 00:02:29.390 LINK boot_partition 00:02:29.390 LINK fused_ordering 00:02:29.390 LINK doorbell_aers 00:02:29.390 LINK startup 00:02:29.390 LINK err_injection 00:02:29.390 LINK simple_copy 00:02:29.390 LINK sgl 00:02:29.390 LINK reserve 00:02:29.390 LINK mkfs 00:02:29.390 LINK aer 00:02:29.390 LINK reset 00:02:29.390 LINK nvme_dp 00:02:29.390 LINK overhead 00:02:29.390 LINK nvme_compliance 00:02:29.649 LINK fdp 00:02:29.649 LINK dif 00:02:29.649 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:29.649 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:29.649 CC examples/nvme/hello_world/hello_world.o 00:02:29.649 CC examples/nvme/arbitration/arbitration.o 00:02:29.649 CC examples/nvme/hotplug/hotplug.o 00:02:29.649 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:29.649 CC examples/nvme/reconnect/reconnect.o 00:02:29.649 CC examples/nvme/abort/abort.o 00:02:29.649 LINK iscsi_fuzz 00:02:29.910 CC examples/accel/perf/accel_perf.o 00:02:29.910 CC examples/blob/cli/blobcli.o 00:02:29.910 CC examples/blob/hello_world/hello_blob.o 00:02:29.910 LINK cmb_copy 00:02:29.910 LINK pmr_persistence 00:02:29.910 LINK hello_world 00:02:29.910 LINK hotplug 00:02:30.224 LINK arbitration 00:02:30.224 LINK abort 00:02:30.224 LINK reconnect 00:02:30.224 
LINK hello_blob 00:02:30.224 LINK nvme_manage 00:02:30.224 CC test/bdev/bdevio/bdevio.o 00:02:30.224 LINK accel_perf 00:02:30.224 LINK cuse 00:02:30.484 LINK blobcli 00:02:30.484 LINK bdevio 00:02:30.745 CC examples/bdev/hello_world/hello_bdev.o 00:02:30.745 CC examples/bdev/bdevperf/bdevperf.o 00:02:31.006 LINK hello_bdev 00:02:31.576 LINK bdevperf 00:02:32.148 CC examples/nvmf/nvmf/nvmf.o 00:02:32.407 LINK nvmf 00:02:33.350 LINK esnap 00:02:33.922 00:02:33.922 real 0m51.070s 00:02:33.922 user 6m33.691s 00:02:33.922 sys 4m32.413s 00:02:33.922 21:59:59 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:33.922 21:59:59 make -- common/autotest_common.sh@10 -- $ set +x 00:02:33.922 ************************************ 00:02:33.922 END TEST make 00:02:33.922 ************************************ 00:02:33.922 21:59:59 -- common/autotest_common.sh@1142 -- $ return 0 00:02:33.922 21:59:59 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:33.922 21:59:59 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:33.922 21:59:59 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:33.922 21:59:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.922 21:59:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:33.922 21:59:59 -- pm/common@44 -- $ pid=2432035 00:02:33.922 21:59:59 -- pm/common@50 -- $ kill -TERM 2432035 00:02:33.922 21:59:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.922 21:59:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:33.922 21:59:59 -- pm/common@44 -- $ pid=2432036 00:02:33.922 21:59:59 -- pm/common@50 -- $ kill -TERM 2432036 00:02:33.922 21:59:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.922 21:59:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:33.922 21:59:59 -- pm/common@44 -- $ pid=2432038 00:02:33.922 21:59:59 -- pm/common@50 -- $ kill -TERM 2432038 00:02:33.922 21:59:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.922 21:59:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:33.922 21:59:59 -- pm/common@44 -- $ pid=2432064 00:02:33.922 21:59:59 -- pm/common@50 -- $ sudo -E kill -TERM 2432064 00:02:33.922 21:59:59 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:33.922 21:59:59 -- nvmf/common.sh@7 -- # uname -s 00:02:33.922 21:59:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:33.922 21:59:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:33.922 21:59:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:33.922 21:59:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:33.922 21:59:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:33.922 21:59:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:33.922 21:59:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:33.922 21:59:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:33.922 21:59:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:33.923 21:59:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:33.923 21:59:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:33.923 21:59:59 -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:33.923 21:59:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:33.923 21:59:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:33.923 21:59:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:33.923 21:59:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:33.923 21:59:59 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:33.923 21:59:59 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:33.923 21:59:59 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:33.923 21:59:59 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:33.923 21:59:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:33.923 21:59:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:33.923 21:59:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:33.923 21:59:59 -- paths/export.sh@5 -- # export PATH 00:02:33.923 21:59:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:33.923 21:59:59 -- nvmf/common.sh@47 -- # : 0 00:02:33.923 21:59:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:33.923 21:59:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:33.923 21:59:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:33.923 21:59:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:33.923 21:59:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:33.923 21:59:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:33.923 21:59:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:33.923 21:59:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:34.183 21:59:59 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:34.183 21:59:59 -- spdk/autotest.sh@32 -- # uname -s 00:02:34.183 21:59:59 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:34.183 21:59:59 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:34.183 21:59:59 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:34.183 21:59:59 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:34.183 21:59:59 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:34.183 21:59:59 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:34.183 21:59:59 -- spdk/autotest.sh@46 -- # type -P 
udevadm 00:02:34.183 21:59:59 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:34.183 21:59:59 -- spdk/autotest.sh@48 -- # udevadm_pid=2495155 00:02:34.183 21:59:59 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:34.183 21:59:59 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:34.183 21:59:59 -- pm/common@17 -- # local monitor 00:02:34.183 21:59:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:34.183 21:59:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:34.183 21:59:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:34.183 21:59:59 -- pm/common@21 -- # date +%s 00:02:34.183 21:59:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:34.183 21:59:59 -- pm/common@21 -- # date +%s 00:02:34.183 21:59:59 -- pm/common@25 -- # sleep 1 00:02:34.183 21:59:59 -- pm/common@21 -- # date +%s 00:02:34.183 21:59:59 -- pm/common@21 -- # date +%s 00:02:34.183 21:59:59 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721073599 00:02:34.183 21:59:59 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721073599 00:02:34.183 21:59:59 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721073599 00:02:34.183 21:59:59 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721073599 00:02:34.183 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721073599_collect-vmstat.pm.log 00:02:34.183 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721073599_collect-cpu-temp.pm.log 00:02:34.183 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721073599_collect-cpu-load.pm.log 00:02:34.183 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721073599_collect-bmc-pm.bmc.pm.log 00:02:35.127 22:00:00 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:35.127 22:00:00 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:35.127 22:00:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:35.127 22:00:00 -- common/autotest_common.sh@10 -- # set +x 00:02:35.127 22:00:00 -- spdk/autotest.sh@59 -- # create_test_list 00:02:35.127 22:00:00 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:35.127 22:00:00 -- common/autotest_common.sh@10 -- # set +x 00:02:35.127 22:00:00 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:35.127 22:00:00 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:35.127 22:00:00 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:35.127 22:00:00 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:35.127 22:00:00 -- spdk/autotest.sh@63 -- # cd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:35.127 22:00:00 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:35.127 22:00:00 -- common/autotest_common.sh@1455 -- # uname 00:02:35.127 22:00:00 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:35.127 22:00:00 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:35.127 22:00:00 -- common/autotest_common.sh@1475 -- # uname 00:02:35.127 22:00:00 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:35.127 22:00:00 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:35.127 22:00:00 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:35.127 22:00:00 -- spdk/autotest.sh@72 -- # hash lcov 00:02:35.127 22:00:00 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:35.127 22:00:00 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:35.127 --rc lcov_branch_coverage=1 00:02:35.127 --rc lcov_function_coverage=1 00:02:35.127 --rc genhtml_branch_coverage=1 00:02:35.127 --rc genhtml_function_coverage=1 00:02:35.127 --rc genhtml_legend=1 00:02:35.127 --rc geninfo_all_blocks=1 00:02:35.127 ' 00:02:35.127 22:00:00 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:35.127 --rc lcov_branch_coverage=1 00:02:35.127 --rc lcov_function_coverage=1 00:02:35.127 --rc genhtml_branch_coverage=1 00:02:35.127 --rc genhtml_function_coverage=1 00:02:35.127 --rc genhtml_legend=1 00:02:35.127 --rc geninfo_all_blocks=1 00:02:35.127 ' 00:02:35.127 22:00:00 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:35.127 --rc lcov_branch_coverage=1 00:02:35.127 --rc lcov_function_coverage=1 00:02:35.127 --rc genhtml_branch_coverage=1 00:02:35.127 --rc genhtml_function_coverage=1 00:02:35.127 --rc genhtml_legend=1 00:02:35.127 --rc geninfo_all_blocks=1 00:02:35.127 --no-external' 00:02:35.127 22:00:00 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:35.127 --rc lcov_branch_coverage=1 00:02:35.127 --rc lcov_function_coverage=1 00:02:35.127 --rc genhtml_branch_coverage=1 00:02:35.127 --rc genhtml_function_coverage=1 00:02:35.127 --rc genhtml_legend=1 00:02:35.127 --rc geninfo_all_blocks=1 00:02:35.127 --no-external' 00:02:35.127 22:00:00 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:35.127 lcov: LCOV version 1.14 00:02:35.127 22:00:00 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:40.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:40.420 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:40.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:40.420 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:40.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:40.420 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:40.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:40.420 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:40.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:40.420 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:40.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:40.420 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:40.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:40.420 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:40.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:40.420 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:40.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:40.420 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:40.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:40.420 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:40.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:40.420 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:40.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:40.420 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:40.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:40.420 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:40.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:40.420 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:40.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:40.420 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:40.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:40.420 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:40.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:40.420 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:40.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:40.420 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:40.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:40.420 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:40.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:40.420 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:40.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:40.420 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:40.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:40.420 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:40.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:40.420 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:40.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:40.420 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 
00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no 
functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:40.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:40.421 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:40.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:40.422 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:40.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:40.422 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:40.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:40.422 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:40.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:40.422 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:40.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:40.422 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:40.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:40.422 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:40.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:40.422 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:40.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:40.422 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:40.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:40.422 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:58.538 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:58.538 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:05.117 22:00:30 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:05.117 22:00:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:05.117 22:00:30 -- common/autotest_common.sh@10 -- # set +x 00:03:05.117 22:00:30 -- spdk/autotest.sh@91 -- # rm -f 00:03:05.117 22:00:30 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:08.449 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:08.449 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:08.449 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:08.449 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:08.449 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:08.449 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:08.449 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:08.449 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:08.449 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:08.449 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:08.449 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:08.449 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:08.449 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:08.449 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:08.449 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:08.449 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:08.449 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:08.710 22:00:33 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:08.710 22:00:33 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:08.710 22:00:33 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:08.710 22:00:33 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:08.710 22:00:33 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:08.710 22:00:33 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:08.710 22:00:33 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:08.710 22:00:33 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:08.710 22:00:33 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:08.710 22:00:33 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:08.710 22:00:33 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:08.710 22:00:33 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:08.710 22:00:33 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:08.710 22:00:33 -- 
scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:08.710 22:00:33 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:08.710 No valid GPT data, bailing 00:03:08.710 22:00:34 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:08.710 22:00:34 -- scripts/common.sh@391 -- # pt= 00:03:08.710 22:00:34 -- scripts/common.sh@392 -- # return 1 00:03:08.710 22:00:34 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:08.710 1+0 records in 00:03:08.710 1+0 records out 00:03:08.710 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00212602 s, 493 MB/s 00:03:08.710 22:00:34 -- spdk/autotest.sh@118 -- # sync 00:03:08.970 22:00:34 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:08.970 22:00:34 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:08.970 22:00:34 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:17.104 22:00:41 -- spdk/autotest.sh@124 -- # uname -s 00:03:17.104 22:00:41 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:17.104 22:00:41 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:17.104 22:00:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:17.104 22:00:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:17.104 22:00:41 -- common/autotest_common.sh@10 -- # set +x 00:03:17.104 ************************************ 00:03:17.104 START TEST setup.sh 00:03:17.104 ************************************ 00:03:17.104 22:00:41 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:17.104 * Looking for test storage... 00:03:17.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:17.104 22:00:41 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:17.104 22:00:41 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:17.104 22:00:41 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:17.104 22:00:41 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:17.104 22:00:41 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:17.104 22:00:41 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:17.104 ************************************ 00:03:17.104 START TEST acl 00:03:17.104 ************************************ 00:03:17.104 22:00:41 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:17.104 * Looking for test storage... 
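The trace just above (scripts/common.sh and autotest.sh) only scribbles on /dev/nvme0n1 after confirming the namespace is not zoned and carries no partition table, and only then runs the 1 MiB dd smoke test. A minimal stand-alone sketch of that guard, with the device path hard-coded for illustration — the real helpers (get_zoned_devs, block_in_use) live in the SPDK scripts and also run the spdk-gpt.py probe seen above:

#!/usr/bin/env bash
set -euo pipefail

dev=/dev/nvme0n1          # illustrative; autotest iterates /dev/nvme*n!(*p*) instead

# Zoned namespaces report "host-aware"/"host-managed" here, regular ones "none".
zoned=$(cat "/sys/block/$(basename "$dev")/queue/zoned" 2>/dev/null || echo none)

# blkid prints the partition-table type (gpt, dos, ...) or nothing if the disk is blank;
# it exits non-zero when nothing is found, hence the || true.
pt=$(blkid -s PTTYPE -o value "$dev" || true)

if [[ $zoned == none && -z $pt ]]; then
    # Same smoke test as the autotest.sh trace above: overwrite the first MiB and flush.
    dd if=/dev/zero of="$dev" bs=1M count=1
    sync
fi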
00:03:17.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:17.104 22:00:42 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:17.104 22:00:42 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:17.104 22:00:42 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:17.104 22:00:42 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:17.104 22:00:42 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:17.104 22:00:42 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:17.104 22:00:42 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:17.104 22:00:42 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:17.104 22:00:42 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:17.104 22:00:42 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:17.104 22:00:42 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:17.104 22:00:42 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:17.104 22:00:42 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:17.104 22:00:42 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:17.104 22:00:42 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:17.104 22:00:42 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:21.309 22:00:45 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:21.309 22:00:46 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:21.309 22:00:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.309 22:00:46 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:21.309 22:00:46 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:21.309 22:00:46 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:24.610 Hugepages 00:03:24.610 node hugesize free / total 00:03:24.610 22:00:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:24.610 22:00:49 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:24.610 22:00:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.610 22:00:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:24.610 22:00:49 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:24.610 22:00:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.610 22:00:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:24.610 22:00:49 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:24.610 22:00:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.610 00:03:24.610 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:24.610 22:00:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:24.610 22:00:49 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:24.610 22:00:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.610 22:00:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:03:24.610 22:00:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.610 22:00:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.610 22:00:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.610 22:00:49 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:01.1 == *:*:*.* ]] 00:03:24.610 22:00:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.610 22:00:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.610 22:00:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.610 22:00:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:03:24.610 22:00:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.610 22:00:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:24.611 22:00:49 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:24.611 22:00:49 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:24.611 22:00:49 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:24.611 22:00:49 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:24.611 ************************************ 00:03:24.611 START TEST denied 00:03:24.611 ************************************ 00:03:24.611 22:00:49 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:24.611 22:00:49 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:03:24.611 22:00:49 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:24.611 22:00:49 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:03:24.611 22:00:49 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.611 22:00:49 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:28.811 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:03:28.811 22:00:53 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:03:28.811 22:00:53 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:28.811 22:00:53 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:28.811 22:00:53 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:03:28.811 22:00:53 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:03:28.811 22:00:53 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:28.811 22:00:53 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:28.811 22:00:53 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:28.811 22:00:53 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:28.811 22:00:53 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:33.014 00:03:33.014 real 0m8.643s 00:03:33.014 user 0m2.869s 00:03:33.014 sys 0m5.092s 00:03:33.014 22:00:58 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:33.014 22:00:58 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:33.014 ************************************ 00:03:33.014 END TEST denied 00:03:33.014 ************************************ 00:03:33.014 22:00:58 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:33.014 22:00:58 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:33.014 22:00:58 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:33.014 22:00:58 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:33.014 22:00:58 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:33.014 ************************************ 00:03:33.014 START TEST allowed 00:03:33.014 ************************************ 00:03:33.014 22:00:58 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:33.014 22:00:58 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:33.014 22:00:58 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:33.014 22:00:58 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:33.014 22:00:58 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:33.014 22:00:58 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:38.392 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:38.392 22:01:03 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:38.393 22:01:03 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:38.393 22:01:03 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:38.393 22:01:03 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:38.393 22:01:03 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:42.620 00:03:42.620 real 0m9.047s 00:03:42.620 user 0m2.603s 00:03:42.620 sys 0m4.669s 00:03:42.620 22:01:07 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:42.620 22:01:07 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:42.620 ************************************ 00:03:42.620 END TEST allowed 00:03:42.620 ************************************ 00:03:42.620 22:01:07 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:42.620 00:03:42.620 real 0m25.401s 00:03:42.620 user 0m8.343s 00:03:42.620 sys 0m14.783s 00:03:42.620 22:01:07 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:42.620 22:01:07 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:42.620 ************************************ 00:03:42.620 END TEST acl 00:03:42.620 ************************************ 00:03:42.620 22:01:07 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:42.620 22:01:07 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:42.620 22:01:07 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:42.620 22:01:07 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.620 22:01:07 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:42.620 ************************************ 00:03:42.620 START TEST hugepages 00:03:42.620 ************************************ 00:03:42.620 22:01:07 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:42.620 * Looking for test storage... 00:03:42.620 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:42.620 22:01:07 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:42.620 22:01:07 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:42.620 22:01:07 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:42.620 22:01:07 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:42.620 22:01:07 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:42.620 22:01:07 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:42.620 22:01:07 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:42.620 22:01:07 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:42.620 22:01:07 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:42.620 22:01:07 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:42.620 22:01:07 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.620 22:01:07 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.620 22:01:07 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.620 22:01:07 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.620 22:01:07 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.620 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.620 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.620 22:01:07 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 102761976 kB' 'MemAvailable: 106249972 kB' 'Buffers: 2704 kB' 'Cached: 14483236 kB' 'SwapCached: 0 kB' 'Active: 11528252 kB' 'Inactive: 3523448 kB' 'Active(anon): 11054068 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 569216 kB' 'Mapped: 186736 kB' 'Shmem: 10488308 kB' 'KReclaimable: 531308 kB' 'Slab: 1401980 kB' 'SReclaimable: 531308 kB' 'SUnreclaim: 870672 kB' 'KernelStack: 27264 kB' 'PageTables: 8856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460892 kB' 'Committed_AS: 12633512 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235428 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4486516 kB' 'DirectMap2M: 33990656 kB' 'DirectMap1G: 97517568 kB' 00:03:42.620 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.620 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.620 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.620 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.620 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.620 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.620 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.620 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.620 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.620 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.620 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.620 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.620 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.620 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.620 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.620 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.620 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.620 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.620 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.620 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.620 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.620 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.620 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.621 22:01:07 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 
00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.621 22:01:07 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.621 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.622 22:01:07 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.622 22:01:07 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:42.622 22:01:07 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:42.623 22:01:07 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:42.623 22:01:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:42.623 22:01:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:42.623 22:01:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:42.623 22:01:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:42.623 22:01:07 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:42.623 22:01:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:42.623 22:01:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # 
echo 0 00:03:42.623 22:01:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:42.623 22:01:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:42.623 22:01:07 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:42.623 22:01:07 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:42.623 22:01:07 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:42.623 22:01:07 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:42.623 22:01:07 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.623 22:01:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:42.623 ************************************ 00:03:42.623 START TEST default_setup 00:03:42.623 ************************************ 00:03:42.623 22:01:07 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:42.623 22:01:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:42.623 22:01:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:42.623 22:01:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:42.623 22:01:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:42.623 22:01:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:42.623 22:01:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:42.623 22:01:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:42.623 22:01:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:42.623 22:01:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:42.623 22:01:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:42.623 22:01:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:42.623 22:01:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:42.623 22:01:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:42.623 22:01:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:42.623 22:01:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:42.623 22:01:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:42.623 22:01:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:42.623 22:01:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:42.623 22:01:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:42.623 22:01:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:42.623 22:01:07 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.623 22:01:07 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:45.923 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:45.923 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:45.923 0000:80:01.4 (8086 0b00): ioatdma -> 
vfio-pci 00:03:45.923 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:45.923 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:45.923 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:45.923 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:45.923 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:45.923 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:45.923 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:45.923 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:45.923 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:45.923 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:45.923 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:45.923 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:45.923 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:45.923 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104958872 kB' 'MemAvailable: 108446836 kB' 'Buffers: 2704 kB' 'Cached: 14483356 kB' 'SwapCached: 0 kB' 'Active: 11543824 kB' 'Inactive: 3523448 kB' 'Active(anon): 11069640 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584632 kB' 'Mapped: 187308 kB' 'Shmem: 10488428 kB' 'KReclaimable: 531276 kB' 'Slab: 1400412 kB' 'SReclaimable: 531276 
kB' 'SUnreclaim: 869136 kB' 'KernelStack: 27344 kB' 'PageTables: 8868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12646960 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235348 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4486516 kB' 'DirectMap2M: 33990656 kB' 'DirectMap1G: 97517568 kB' 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# read -r var val _ 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.188 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.189 22:01:11 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 22:01:11 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- 
setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:46.189 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104958880 kB' 'MemAvailable: 108446844 kB' 'Buffers: 2704 kB' 'Cached: 14483360 kB' 'SwapCached: 0 kB' 'Active: 11543260 kB' 'Inactive: 3523448 kB' 'Active(anon): 11069076 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584092 kB' 'Mapped: 186952 kB' 'Shmem: 10488432 kB' 'KReclaimable: 531276 kB' 'Slab: 1400408 kB' 'SReclaimable: 531276 kB' 'SUnreclaim: 869132 kB' 'KernelStack: 27312 kB' 'PageTables: 8744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12646980 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235284 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4486516 kB' 'DirectMap2M: 33990656 kB' 'DirectMap1G: 97517568 kB' 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.190 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.191 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.191 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.191 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.191 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue
00:03:46.191 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:46.191 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[get_meminfo scans the remaining snapshot fields (SecPageTables through HugePages_Rsvd) against HugePages_Surp at setup/common.sh@32 and skips each one with continue]
00:03:46.191 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:46.191 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:46.191 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:46.191 22:01:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
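The repeated [[ field == pattern ]] / continue entries above are the get_meminfo helper from setup/common.sh walking one meminfo field at a time until the requested key matches. A minimal stand-alone sketch of that lookup, assuming only what the trace shows (the /proc and /sys paths and the "Node N " prefix handling); the function and variable names here are illustrative, not the exact SPDK implementation:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern used to strip "Node N "

    get_meminfo_sketch() {          # usage: get_meminfo_sketch <field> [node]
        local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
        # A per-node query reads that node's own meminfo file instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            line=${line#Node +([0-9]) }              # per-node lines carry a "Node N " prefix
            IFS=': ' read -r var val _ <<< "$line"   # split "Field:   value kB"
            if [[ $var == "$get" ]]; then            # non-matching fields are skipped
                echo "$val"                          # e.g. 0 for HugePages_Surp in this run
                return 0
            fi
        done < "$mem_f"
        return 1
    }

On the machine traced here, get_meminfo_sketch HugePages_Total would print 1024 and get_meminfo_sketch HugePages_Surp 0 would print 0 for NUMA node 0.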
00:03:46.191 22:01:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:46.191 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:46.191 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:46.191 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:46.191 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:46.191 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:46.191 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:46.191 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:46.191 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:46.191 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:46.191 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:46.191 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:46.191 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104957872 kB' 'MemAvailable: 108445836 kB' 'Buffers: 2704 kB' 'Cached: 14483376 kB' 'SwapCached: 0 kB' 'Active: 11543348 kB' 'Inactive: 3523448 kB' 'Active(anon): 11069164 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584132 kB' 'Mapped: 186952 kB' 'Shmem: 10488448 kB' 'KReclaimable: 531276 kB' 'Slab: 1400408 kB' 'SReclaimable: 531276 kB' 'SUnreclaim: 869132 kB' 'KernelStack: 27312 kB' 'PageTables: 8744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12647000 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235300 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4486516 kB' 'DirectMap2M: 33990656 kB' 'DirectMap1G: 97517568 kB'
[get_meminfo scans the snapshot fields from MemTotal through HugePages_Free against HugePages_Rsvd at setup/common.sh@32 and skips each one with continue]
00:03:46.193 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:46.193 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:46.193 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:46.193 22:01:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:03:46.193 22:01:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:46.193 nr_hugepages=1024
00:03:46.193 22:01:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:46.193 resv_hugepages=0
00:03:46.193 22:01:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:46.193 surplus_hugepages=0
00:03:46.193 22:01:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:46.193 anon_hugepages=0
00:03:46.193 22:01:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:46.193 22:01:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
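The snapshot read above is internally consistent for a machine where only the default 2048 kB pages are in use: 1024 pages of 2048 kB account for the 2097152 kB reported as Hugetlb. A quick stand-alone cross-check of that arithmetic (the awk commands are illustrative and not part of setup.sh):

    pages=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)    # 1024 in this run
    size_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)     # 2048 kB per page
    hugetlb_kb=$(awk '/^Hugetlb:/ {print $2}' /proc/meminfo)       # 2097152 kB in this run
    (( pages * size_kb == hugetlb_kb )) && echo "hugepage pool: $((pages * size_kb)) kB"   # 2 GiB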
00:03:46.193 22:01:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:46.193 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:46.193 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:46.193 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:46.193 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:46.193 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:46.193 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:46.193 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:46.193 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:46.193 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:46.193 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:46.193 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:46.194 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104958652 kB' 'MemAvailable: 108446616 kB' 'Buffers: 2704 kB' 'Cached: 14483416 kB' 'SwapCached: 0 kB' 'Active: 11542996 kB' 'Inactive: 3523448 kB' 'Active(anon): 11068812 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583740 kB' 'Mapped: 186952 kB' 'Shmem: 10488488 kB' 'KReclaimable: 531276 kB' 'Slab: 1400408 kB' 'SReclaimable: 531276 kB' 'SUnreclaim: 869132 kB' 'KernelStack: 27296 kB' 'PageTables: 8692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12647024 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235316 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4486516 kB' 'DirectMap2M: 33990656 kB' 'DirectMap1G: 97517568 kB'
[get_meminfo scans the snapshot fields from MemTotal through Unaccepted against HugePages_Total at setup/common.sh@32 and skips each one with continue]
00:03:46.195 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:46.195 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:03:46.195 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:46.195 22:01:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
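The arithmetic behind the setup/hugepages.sh@110 check above: the kernel's HugePages_Total (1024) must equal the requested page count plus the surplus and reserved pages read back earlier (both 0 in this run). A stand-alone equivalent, again using illustrative awk in place of the script's own get_meminfo helper:

    nr_hugepages=1024                                             # the value echoed as nr_hugepages above
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)     # 0 in this run
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)     # 0 in this run
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 1024 in this run
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2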
00:03:46.195 22:01:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:46.195 22:01:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:46.195 22:01:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:46.195 22:01:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:46.195 22:01:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:46.195 22:01:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:46.195 22:01:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:46.195 22:01:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:46.195 22:01:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:46.195 22:01:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
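get_nodes above found two NUMA nodes and recorded 1024 pages for node 0 and 0 for node 1; the per-node query traced next reads node 0's own meminfo file, whose lines carry a "Node N " prefix. A stand-alone sketch of that per-node walk (setup.sh itself globs node+([0-9]) with extglob; the plain glob and the awk extraction here are illustrative):

    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        # Per-node meminfo lines look like "Node 0 HugePages_Total:  1024".
        total=$(awk '/HugePages_Total:/ {print $NF}' "$node_dir/meminfo")
        surp=$(awk '/HugePages_Surp:/ {print $NF}' "$node_dir/meminfo")
        echo "node$node: HugePages_Total=$total HugePages_Surp=$surp"
    done
    # In this run node 0 reports all 1024 pages and node 1 reports 0, matching nodes_sys above.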
00:03:46.195 22:01:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:46.195 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:46.195 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:03:46.195 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:46.195 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:46.195 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:46.195 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:46.195 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:46.195 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:46.195 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:46.195 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:46.195 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:46.195 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 52897664 kB' 'MemUsed: 12761344 kB' 'SwapCached: 0 kB' 'Active: 5088800 kB' 'Inactive: 3298656 kB' 'Active(anon): 4936240 kB' 'Inactive(anon): 0 kB' 'Active(file): 152560 kB' 'Inactive(file): 3298656 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8096424 kB' 'Mapped: 105184 kB' 'AnonPages: 294252 kB' 'Shmem: 4645208 kB' 'KernelStack: 15688 kB' 'PageTables: 4896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 398624 kB' 'Slab: 913908 kB' 'SReclaimable: 398624 kB' 'SUnreclaim: 515284 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[get_meminfo scans the node 0 snapshot fields against HugePages_Surp at setup/common.sh@32; MemTotal through KReclaimable are tested and skipped with continue, and the scan resumes below]
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
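The hugepage counters this scan keeps skipping past (HugePages_Total, HugePages_Free, HugePages_Surp) are also exported per NUMA node through sysfs, which is what the per-node bookkeeping later in the test relies on. A small illustration using the standard kernel hugetlb sysfs layout; node 0 and the 2048 kB page size are taken from this log's meminfo dumps, and nothing here is SPDK-specific:

# Read the per-node hugepage counters straight from sysfs (node 0, 2 MiB pages).
node=0
page_kb=2048
base=/sys/devices/system/node/node$node/hugepages/hugepages-${page_kb}kB
for f in nr_hugepages free_hugepages surplus_hugepages; do
    printf '%s: %s\n' "$f" "$(cat "$base/$f")"
done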
00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:46.196 node0=1024 expecting 1024 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:46.196 00:03:46.196 real 0m3.835s 00:03:46.196 user 0m1.447s 00:03:46.196 sys 0m2.356s 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:46.196 22:01:11 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:46.196 ************************************ 00:03:46.196 END TEST default_setup 00:03:46.196 ************************************ 00:03:46.457 22:01:11 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:46.457 22:01:11 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:46.457 22:01:11 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:46.457 22:01:11 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:46.457 22:01:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:46.457 ************************************ 00:03:46.457 START TEST per_node_1G_alloc 00:03:46.457 ************************************ 00:03:46.457 22:01:11 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:46.457 22:01:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:46.457 22:01:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:46.457 22:01:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:46.457 22:01:11 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:46.457 22:01:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:46.457 22:01:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:46.457 22:01:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:46.457 22:01:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:46.458 22:01:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:46.458 22:01:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:46.458 22:01:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:46.458 22:01:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:46.458 22:01:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:46.458 22:01:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:46.458 22:01:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:46.458 22:01:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:46.458 22:01:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:46.458 22:01:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:46.458 22:01:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:46.458 22:01:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:46.458 22:01:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:46.458 22:01:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:46.458 22:01:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:46.458 22:01:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:46.458 22:01:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:46.458 22:01:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.458 22:01:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:49.767 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:49.767 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:49.767 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:49.767 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:49.767 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:49.767 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:49.767 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:49.767 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:49.767 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:49.767 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:49.767 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:49.767 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:49.767 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:49.767 
0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:49.767 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:49.767 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:49.767 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:49.767 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:49.767 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:49.767 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:49.767 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:49.767 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:49.767 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:49.767 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:49.767 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:49.767 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:49.767 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:49.767 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:49.767 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:49.767 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:49.767 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.767 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.767 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104988696 kB' 'MemAvailable: 108476660 kB' 'Buffers: 2704 kB' 'Cached: 14483516 kB' 'SwapCached: 0 kB' 'Active: 11543000 kB' 'Inactive: 3523448 kB' 'Active(anon): 11068816 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583100 kB' 'Mapped: 186068 kB' 'Shmem: 10488588 kB' 'KReclaimable: 531276 kB' 'Slab: 1400344 kB' 'SReclaimable: 531276 kB' 'SUnreclaim: 869068 kB' 'KernelStack: 27296 kB' 'PageTables: 8624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12639752 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235444 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4486516 kB' 'DirectMap2M: 33990656 kB' 'DirectMap1G: 97517568 kB' 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.768 22:01:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.768 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 
0 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104989956 kB' 'MemAvailable: 108477920 kB' 'Buffers: 2704 kB' 'Cached: 14483520 kB' 'SwapCached: 0 kB' 'Active: 11542708 kB' 'Inactive: 3523448 kB' 'Active(anon): 11068524 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583236 kB' 'Mapped: 185980 kB' 'Shmem: 10488592 kB' 'KReclaimable: 531276 kB' 'Slab: 1400360 kB' 'SReclaimable: 531276 kB' 'SUnreclaim: 869084 kB' 'KernelStack: 27280 kB' 'PageTables: 8564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12639772 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235428 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4486516 kB' 'DirectMap2M: 33990656 kB' 'DirectMap1G: 97517568 kB' 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.769 22:01:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.769 22:01:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.769 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.770 
22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.770 22:01:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.770 22:01:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.770 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.771 22:01:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104990096 kB' 'MemAvailable: 108478060 kB' 'Buffers: 2704 kB' 'Cached: 14483536 kB' 'SwapCached: 0 kB' 'Active: 11542744 kB' 'Inactive: 3523448 kB' 'Active(anon): 11068560 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583292 kB' 'Mapped: 185980 kB' 'Shmem: 10488608 kB' 'KReclaimable: 531276 kB' 'Slab: 1400360 kB' 'SReclaimable: 531276 kB' 'SUnreclaim: 869084 kB' 'KernelStack: 27296 kB' 'PageTables: 8600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12642648 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235428 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4486516 kB' 'DirectMap2M: 33990656 kB' 'DirectMap1G: 97517568 kB' 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.771 
22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.771 22:01:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.771 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.771 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.771 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.771 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.771 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.771 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.771 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.771 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.771 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.771 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.771 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.771 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.771 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.771 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.771 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.771 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.771 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.771 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.771 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.771 22:01:15 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.771 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.771 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.771 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.771 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.771 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.771 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.771 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.771 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.771 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.771 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.771 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.771 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.771 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.771 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.771 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.771 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.771 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.771 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.771 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.771 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.771 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.772 22:01:15 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.772 22:01:15 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.772 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.773 22:01:15 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:49.773 22:01:15 
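The trace above is setup/common.sh's get_meminfo walking /proc/meminfo field by field: every key is compared against the one requested (first HugePages_Surp, then HugePages_Rsvd), non-matching keys hit "continue", and the matching key's value is echoed back to hugepages.sh, which stores it in surp and resv. A minimal standalone sketch of that parsing pattern follows; my_get_meminfo is a hypothetical stand-in, not the SPDK function itself.

#!/usr/bin/env bash
# Hedged sketch, assuming the usual Linux /proc/meminfo layout of "Key: value [kB]".
my_get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Skip every field until the requested key is reached.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1   # key not present
}

surp=$(my_get_meminfo HugePages_Surp)   # 0 in this run
resv=$(my_get_meminfo HugePages_Rsvd)   # 0 in this run
echo "surp=$surp resv=$resv"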
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:49.773 nr_hugepages=1024 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:49.773 resv_hugepages=0 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:49.773 surplus_hugepages=0 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:49.773 anon_hugepages=0 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104988100 kB' 'MemAvailable: 108476064 kB' 'Buffers: 2704 kB' 'Cached: 14483560 kB' 'SwapCached: 0 kB' 'Active: 11542940 kB' 'Inactive: 3523448 kB' 'Active(anon): 11068756 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583372 kB' 'Mapped: 185980 kB' 'Shmem: 10488632 kB' 'KReclaimable: 531276 kB' 'Slab: 1400360 kB' 'SReclaimable: 531276 kB' 'SUnreclaim: 869084 kB' 'KernelStack: 27248 kB' 'PageTables: 8480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12641056 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235428 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4486516 kB' 'DirectMap2M: 33990656 kB' 'DirectMap1G: 
97517568 kB' 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.773 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
[[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.774 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:49.775 22:01:15 
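At hugepages.sh@107 and @110 the script reconciles what it just read: the HugePages_Total reported by /proc/meminfo (1024 here) must match the requested page count plus surplus and reserved pages, after which get_nodes enumerates the NUMA nodes. A hedged, self-contained sketch of that consistency check; nr_requested is an illustrative name, not the script's own variable.

# Sketch of the reconciliation step, assuming 1024 x 2 MB pages were requested
# on a two-node machine (matching the snapshots in this log).
nr_requested=1024
read -r total surp resv < <(awk '
    $1 == "HugePages_Total:" {t = $2}
    $1 == "HugePages_Surp:"  {s = $2}
    $1 == "HugePages_Rsvd:"  {r = $2}
    END {print t, s, r}' /proc/meminfo)

if (( total == nr_requested + surp + resv )); then
    echo "hugepage totals consistent: $total pages allocated"
else
    echo "mismatch: total=$total surp=$surp resv=$resv" >&2
fi

# get_nodes then counts the NUMA nodes and expects an even 512/512 split.
shopt -s nullglob
nodes=(/sys/devices/system/node/node[0-9]*)   # mirrors the node+([0-9]) extglob in the trace
echo "no_nodes=${#nodes[@]}"                  # 2 on this machine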
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53948924 kB' 'MemUsed: 11710084 kB' 'SwapCached: 0 kB' 'Active: 5088676 kB' 'Inactive: 3298656 kB' 'Active(anon): 4936116 kB' 'Inactive(anon): 0 kB' 'Active(file): 152560 kB' 'Inactive(file): 3298656 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8096540 kB' 'Mapped: 104732 kB' 'AnonPages: 293960 kB' 'Shmem: 4645324 kB' 'KernelStack: 15640 kB' 'PageTables: 4892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 398624 kB' 'Slab: 913608 kB' 'SReclaimable: 398624 kB' 'SUnreclaim: 514984 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.775 22:01:15 
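For the per-node pass, get_meminfo is called again with a node argument: because /sys/devices/system/node/node0/meminfo exists, mem_f switches from /proc/meminfo to the per-node file, whose lines carry a "Node 0" prefix that the mapfile step strips before the same key/value scan runs. A hedged sketch of that variant; the helper name is again hypothetical and the prefix-stripping is done with sed rather than the script's mapfile expansion.

# Sketch of a per-node lookup, assuming standard Linux per-node meminfo files
# such as /sys/devices/system/node/node0/meminfo ("Node 0 Key: value").
my_get_node_meminfo() {
    local get=$1 node=$2 var val _
    local mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node${node}/meminfo ]] &&
        mem_f=/sys/devices/system/node/node${node}/meminfo
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
    return 1
}

my_get_node_meminfo HugePages_Surp 0    # 0 for node0 in this run
my_get_node_meminfo HugePages_Total 0   # 512 for node0 in this run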
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.775 22:01:15 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.775 22:01:15 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.775 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.776 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.776 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.776 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.776 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.776 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.776 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.776 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.776 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.776 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.776 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.038 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.038 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.038 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.038 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.038 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.038 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.038 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.038 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.038 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.038 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.038 
22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.038 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.038 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.038 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.038 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.038 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.038 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.038 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.038 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.038 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.038 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.038 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.038 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.038 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.038 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.038 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.038 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.038 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.038 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.038 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.038 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.038 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.038 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.038 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.038 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.038 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.038 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.038 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.038 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.038 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.038 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.038 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.038 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.038 22:01:15 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.038 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.038 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.038 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.038 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679872 kB' 'MemFree: 51038352 kB' 'MemUsed: 9641520 kB' 'SwapCached: 0 kB' 'Active: 6454188 kB' 'Inactive: 224792 kB' 'Active(anon): 6132564 kB' 'Inactive(anon): 0 kB' 'Active(file): 321624 kB' 'Inactive(file): 224792 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6389748 kB' 'Mapped: 81248 kB' 'AnonPages: 289304 kB' 'Shmem: 5843332 kB' 
'KernelStack: 11592 kB' 'PageTables: 3724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132652 kB' 'Slab: 486752 kB' 'SReclaimable: 132652 kB' 'SUnreclaim: 354100 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 22:01:15 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.040 22:01:15 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.040 22:01:15 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:50.040 node0=512 expecting 512 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:50.040 node1=512 expecting 512 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:50.040 00:03:50.040 real 0m3.570s 00:03:50.040 user 0m1.378s 00:03:50.040 sys 0m2.221s 00:03:50.040 22:01:15 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:50.040 22:01:15 
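The HugePages_Surp walk above comes from the get_meminfo helper in setup/common.sh, which the xtrace shows selecting the per-node meminfo file, stripping the "Node <n> " prefix, and scanning field by field with IFS=': '. The lines below are a minimal sketch of that loop reconstructed from the traced commands (common.sh@17-33); they illustrate the traced behaviour and are not the verbatim upstream script.

shopt -s extglob   # needed for the +([0-9]) prefix strip seen at common.sh@29

# Minimal reconstruction of the traced parsing loop; illustrative only.
# usage: get_meminfo HugePages_Surp 1
get_meminfo() {
    local get=$1            # field to look up, e.g. HugePages_Surp
    local node=${2:-}       # optional NUMA node number
    local var val line
    local mem_f mem
    mem_f=/proc/meminfo
    # Per-node counters live under /sys; fall back to the global file otherwise.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # The per-node file prefixes every line with "Node <n> "; strip it.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"
        return 0
    done
    echo 0   # fallback if the field is missing (assumption; not exercised in this log)
}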
setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:50.040 ************************************ 00:03:50.040 END TEST per_node_1G_alloc 00:03:50.040 ************************************ 00:03:50.040 22:01:15 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:50.040 22:01:15 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:50.040 22:01:15 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:50.040 22:01:15 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:50.040 22:01:15 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:50.040 ************************************ 00:03:50.040 START TEST even_2G_alloc 00:03:50.040 ************************************ 00:03:50.040 22:01:15 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:50.040 22:01:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:50.040 22:01:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:50.040 22:01:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:50.040 22:01:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:50.040 22:01:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:50.040 22:01:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:50.040 22:01:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:50.040 22:01:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:50.040 22:01:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:50.040 22:01:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:50.040 22:01:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:50.040 22:01:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:50.040 22:01:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:50.040 22:01:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:50.040 22:01:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:50.040 22:01:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:50.040 22:01:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:50.040 22:01:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:50.040 22:01:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:50.040 22:01:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:50.040 22:01:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:50.040 22:01:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:50.040 22:01:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:50.040 22:01:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:50.040 22:01:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:50.040 22:01:15 
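The START of even_2G_alloc above traces hugepages.sh converting the 2097152 kB request into a hugepage count and splitting it evenly across both NUMA nodes (nr_hugepages=1024, 512 per node). Below is a small sketch of that arithmetic, assuming the 2048 kB default hugepage size reported by the meminfo dumps in this log; it mirrors the traced values rather than reproducing hugepages.sh verbatim.

# Sizing arithmetic matching the values traced at hugepages.sh@49-84 above.
size_kb=2097152                    # requested allocation: 2 GiB in kB
hugepage_kb=2048                   # 'Hugepagesize: 2048 kB' from the meminfo dumps
no_nodes=2                         # NUMA nodes on this machine

nr_hugepages=$(( size_kb / hugepage_kb ))   # 1024
per_node=$(( nr_hugepages / no_nodes ))     # 512

nodes_test=()
for (( node = 0; node < no_nodes; node++ )); do
    nodes_test[node]=$per_node
done

echo "nr_hugepages=$nr_hugepages"                       # nr_hugepages=1024
echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"    # node0=512 node1=512

With NRHUGE=1024 and HUGE_EVEN_ALLOC=yes set as traced above, exactly this split is then handed to scripts/setup.sh in the next step of the log.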
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:50.040 22:01:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.040 22:01:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:53.342 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:53.342 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:53.342 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:53.342 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:53.342 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:53.342 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:53.342 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:53.342 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:53.342 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:53.342 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:53.342 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:53.342 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:53.342 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:53.342 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:53.342 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:53.342 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:53.342 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.606 22:01:18 
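After scripts/setup.sh reports every listed device as already bound to vfio-pci, verify_nr_hugepages begins: the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test checks the bracketed THP setting (the string matches the usual contents of /sys/kernel/mm/transparent_hugepage/enabled), and because it is not [never] the script reads AnonHugePages, which is 0 in this run, before totalling the hugepage counters. The following is a hedged stand-in for that check, with awk replacing the traced get_meminfo call so the snippet is self-contained; illustrative only.

# Stand-in for the THP check traced at hugepages.sh@96-97 above; illustrative only.
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)  # e.g. "always [madvise] never"
anon=0
if [[ $thp != *"[never]"* ]]; then
    # THP is not disabled, so anonymous hugepages may be in use; record them.
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
fi
echo "anon=$anon"    # 0 in this run, matching the anon=0 assignment in the trace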
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104997040 kB' 'MemAvailable: 108485004 kB' 'Buffers: 2704 kB' 'Cached: 14483696 kB' 'SwapCached: 0 kB' 'Active: 11545252 kB' 'Inactive: 3523448 kB' 'Active(anon): 11071068 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585052 kB' 'Mapped: 186116 kB' 'Shmem: 10488768 kB' 'KReclaimable: 531276 kB' 'Slab: 1400500 kB' 'SReclaimable: 531276 kB' 'SUnreclaim: 869224 kB' 'KernelStack: 27568 kB' 'PageTables: 9032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12643736 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235716 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4486516 kB' 'DirectMap2M: 33990656 kB' 'DirectMap1G: 97517568 kB' 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.606 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.607 22:01:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104997216 kB' 'MemAvailable: 108485180 kB' 'Buffers: 2704 kB' 'Cached: 14483700 kB' 'SwapCached: 0 kB' 'Active: 11544448 kB' 'Inactive: 3523448 kB' 'Active(anon): 11070264 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585252 kB' 'Mapped: 186500 kB' 'Shmem: 10488772 kB' 'KReclaimable: 531276 kB' 'Slab: 1400476 kB' 'SReclaimable: 531276 kB' 'SUnreclaim: 869200 kB' 'KernelStack: 27520 kB' 'PageTables: 8768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12644844 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235700 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4486516 kB' 'DirectMap2M: 33990656 kB' 'DirectMap1G: 97517568 kB' 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.607 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.608 22:01:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.608 22:01:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.608 22:01:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.608 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104993804 kB' 'MemAvailable: 108481768 kB' 'Buffers: 2704 kB' 'Cached: 14483716 kB' 'SwapCached: 0 kB' 'Active: 11547464 kB' 'Inactive: 3523448 kB' 'Active(anon): 11073280 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 587732 kB' 'Mapped: 186500 kB' 'Shmem: 10488788 kB' 'KReclaimable: 531276 kB' 'Slab: 1400476 kB' 'SReclaimable: 531276 kB' 'SUnreclaim: 869200 kB' 'KernelStack: 27328 kB' 'PageTables: 8436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12646424 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235588 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4486516 kB' 'DirectMap2M: 33990656 kB' 'DirectMap1G: 97517568 kB' 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.609 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.610 
22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.610 22:01:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.610 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:53.611 nr_hugepages=1024 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:53.611 resv_hugepages=0 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:53.611 surplus_hugepages=0 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:53.611 anon_hugepages=0 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.611 22:01:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104991100 kB' 'MemAvailable: 108479064 kB' 'Buffers: 2704 kB' 'Cached: 14483740 kB' 'SwapCached: 0 kB' 'Active: 11550056 kB' 'Inactive: 3523448 kB' 'Active(anon): 11075872 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 591060 kB' 'Mapped: 186912 kB' 'Shmem: 10488812 kB' 'KReclaimable: 531276 kB' 'Slab: 1400476 kB' 'SReclaimable: 531276 kB' 'SUnreclaim: 869200 kB' 'KernelStack: 27504 kB' 'PageTables: 9320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12649916 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235652 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4486516 kB' 'DirectMap2M: 33990656 kB' 'DirectMap1G: 97517568 kB' 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.611 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.612 22:01:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.612 22:01:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.612 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53961008 kB' 'MemUsed: 11698000 kB' 'SwapCached: 0 kB' 'Active: 5092132 kB' 'Inactive: 3298656 kB' 'Active(anon): 4939572 kB' 'Inactive(anon): 0 kB' 'Active(file): 152560 kB' 'Inactive(file): 3298656 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8096712 kB' 'Mapped: 104756 kB' 'AnonPages: 297332 kB' 'Shmem: 4645496 kB' 'KernelStack: 15736 kB' 'PageTables: 5184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 398624 kB' 'Slab: 914104 kB' 'SReclaimable: 398624 kB' 'SUnreclaim: 515480 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.874 22:01:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 22:01:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679872 kB' 'MemFree: 51038972 kB' 'MemUsed: 9640900 kB' 'SwapCached: 0 kB' 'Active: 6452256 kB' 'Inactive: 224792 kB' 'Active(anon): 6130632 kB' 'Inactive(anon): 0 kB' 'Active(file): 321624 kB' 'Inactive(file): 224792 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6389748 kB' 'Mapped: 81248 kB' 'AnonPages: 287404 kB' 'Shmem: 5843332 kB' 'KernelStack: 11688 kB' 'PageTables: 3712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132652 kB' 'Slab: 486372 kB' 'SReclaimable: 132652 kB' 'SUnreclaim: 353720 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.875 
22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.875 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.875 
22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:53.876 22:01:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:53.876 22:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:53.876 22:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:53.876 22:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:53.876 22:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:53.876 22:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:53.876 node0=512 expecting 512 00:03:53.876 22:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:53.876 22:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:53.876 22:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:53.876 22:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:53.876 node1=512 expecting 512 00:03:53.876 22:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:53.876 00:03:53.876 real 0m3.804s 00:03:53.876 user 0m1.554s 00:03:53.876 sys 0m2.293s 00:03:53.876 22:01:19 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:53.876 22:01:19 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:53.876 ************************************ 00:03:53.876 END TEST even_2G_alloc 00:03:53.876 
************************************ 00:03:53.876 22:01:19 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:53.876 22:01:19 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:53.876 22:01:19 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:53.876 22:01:19 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:53.876 22:01:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:53.876 ************************************ 00:03:53.876 START TEST odd_alloc 00:03:53.876 ************************************ 00:03:53.876 22:01:19 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:53.876 22:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:53.876 22:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:53.876 22:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:53.876 22:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:53.876 22:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:53.876 22:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:53.876 22:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:53.876 22:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:53.876 22:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:53.876 22:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:53.876 22:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:53.876 22:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:53.876 22:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:53.876 22:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:53.876 22:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:53.876 22:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:53.876 22:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:53.876 22:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:53.876 22:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:53.876 22:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:53.876 22:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:53.876 22:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:53.876 22:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:53.876 22:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:53.876 22:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:53.876 22:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:53.876 22:01:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.876 22:01:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 
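What the even_2G_alloc trace above amounts to: setup/common.sh's get_meminfo scans /proc/meminfo, or /sys/devices/system/node/node<N>/meminfo when a NUMA node index is given, one field at a time; every key that is not the requested one (HugePages_Total, then HugePages_Surp per node) falls through to the continue branch, and the value of the matching key is echoed back to hugepages.sh. A minimal standalone sketch of that lookup, assuming a hypothetical helper name get_meminfo_value and a sed prefix strip in place of the script's extglob rewrite of the "Node N " prefix; this is an illustration of the idea, not the actual setup/common.sh code:

get_meminfo_value() {
    local field=$1 node=${2:-}
    local file=/proc/meminfo
    # Per-node statistics live under sysfs; fall back to the global file otherwise.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
        && file=/sys/devices/system/node/node$node/meminfo
    local var val _
    # Per-node files prefix each line with "Node <N> "; strip it so both formats
    # parse identically, then split each line on ": " into key and value.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$field" ]]; then
            echo "$val"        # value only; a trailing "kB" unit, if any, lands in the third field
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$file")
    return 1
}

On the machine in this log such a lookup would report, as the echoed values above show, 1024 for HugePages_Total globally and 512 for HugePages_Total on each of node 0 and node 1, with HugePages_Surp at 0 on both nodes.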
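The END/START banners buried in the trace above mark the boundary between the two allocation tests: even_2G_alloc has just confirmed that 1024 pages ended up as 512 on each of the two NUMA nodes (node0=512 expecting 512, node1=512 expecting 512), and odd_alloc now requests 1025 pages, which hugepages.sh pre-splits across the nodes as 512 and 513 (the assignments above set index 1 to 512 and index 0 to 513) before re-running the same verification. A sketch of that split-and-check step, reusing the hypothetical get_meminfo_value helper from the previous sketch; which node receives the leftover odd page is an arbitrary choice here and may differ from the real script:

split_and_check_hugepages() {
    local nr_hugepages=$1 nr_nodes=$2
    local base=$(( nr_hugepages / nr_nodes ))
    local rem=$(( nr_hugepages % nr_nodes ))
    local -a expected
    local node
    for (( node = 0; node < nr_nodes; node++ )); do
        expected[node]=$base
    done
    # Hand the leftover page(s) to the last node in this sketch; 1025 pages
    # over 2 nodes therefore becomes 512 + 513.
    (( expected[nr_nodes - 1] += rem ))
    # Compare against what the kernel actually reserved on each node.
    for (( node = 0; node < nr_nodes; node++ )); do
        local actual
        actual=$(get_meminfo_value HugePages_Total "$node")
        echo "node${node}=${actual} expecting ${expected[node]}"
    done
}

For the even case this prints the same two "node0=512 expecting 512" / "node1=512 expecting 512" lines that appear in the trace; for the odd case the expectations become 512 and 513.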
00:03:57.171 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:57.171 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:57.171 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:57.171 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:57.171 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:57.171 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:57.171 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:57.171 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:57.171 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:57.171 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:57.171 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:57.171 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:57.171 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:57.171 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:57.171 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:57.171 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:57.171 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:57.435 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:57.435 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:57.435 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:57.435 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:57.435 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:57.435 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:57.435 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:57.435 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:57.435 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:57.435 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:57.435 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:57.435 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:57.436 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.436 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.436 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.436 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.436 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.436 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.436 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.436 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.436 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104995296 kB' 'MemAvailable: 108483260 kB' 'Buffers: 2704 kB' 'Cached: 14483888 kB' 'SwapCached: 0 kB' 'Active: 11546360 kB' 'Inactive: 3523448 kB' 'Active(anon): 11072176 kB' 'Inactive(anon): 0 kB' 
'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585660 kB' 'Mapped: 186148 kB' 'Shmem: 10488960 kB' 'KReclaimable: 531276 kB' 'Slab: 1400196 kB' 'SReclaimable: 531276 kB' 'SUnreclaim: 868920 kB' 'KernelStack: 27216 kB' 'PageTables: 8128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 12641980 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235540 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4486516 kB' 'DirectMap2M: 33990656 kB' 'DirectMap1G: 97517568 kB' 00:03:57.436 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.436 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.436 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.436 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.436 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.436 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.436 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.436 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.436 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.436 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.436 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.436 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.436 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.436 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.436 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.436 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.436 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.436 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.436 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.436 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.436 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.436 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.436 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.436 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.436 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.436 22:01:22 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.436 [setup/common.sh@31-@32: the scan reads and skips each remaining /proc/meminfo key, Inactive through HardwareCorrupted, while looking for AnonHugePages] 00:03:57.437 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.437 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.437 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:57.437 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:57.437 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:57.437 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.437 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:57.437 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:57.437 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.437 22:01:22
setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.437 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.437 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.437 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.437 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.437 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.437 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.437 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104996892 kB' 'MemAvailable: 108484856 kB' 'Buffers: 2704 kB' 'Cached: 14483892 kB' 'SwapCached: 0 kB' 'Active: 11545308 kB' 'Inactive: 3523448 kB' 'Active(anon): 11071124 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585520 kB' 'Mapped: 186012 kB' 'Shmem: 10488964 kB' 'KReclaimable: 531276 kB' 'Slab: 1400148 kB' 'SReclaimable: 531276 kB' 'SUnreclaim: 868872 kB' 'KernelStack: 27264 kB' 'PageTables: 8512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 12642000 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235540 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4486516 kB' 'DirectMap2M: 33990656 kB' 'DirectMap1G: 97517568 kB' 00:03:57.437 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.437 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.437 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.437 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.437 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.437 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.437 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.437 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.437 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.437 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.437 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.437 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.437 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.437 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.437 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.437 
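
What is being traced here is the get_meminfo helper in setup/common.sh: it loads /proc/meminfo (or a NUMA node's meminfo) with mapfile, strips any "Node <n>" prefix, then reads "key: value" pairs with IFS=': ', continuing past every key until the requested one matches, echoing that value and returning 0. The sketch below re-creates that flow from the traced commands; the helper name get_meminfo_sketch and the exact node-path handling are illustrative assumptions, not the upstream code.

    #!/usr/bin/env bash
    # Sketch of the /proc/meminfo lookup traced above (illustrative, not the
    # upstream setup/common.sh): print the value recorded for one meminfo key.
    shopt -s extglob

    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local -a mem
        local var val _
        # With a node argument, read that NUMA node's own meminfo instead; the
        # [[ -e /sys/devices/system/node/node/meminfo ]] test in the trace is
        # this check evaluated with an empty $node.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Per-node files prefix each line with "Node <n> "; strip it (extglob).
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every other key
            echo "$val"                        # the "kB" unit falls into $_
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    # Against the meminfo snapshot printed above this resolves to, e.g.:
    #   get_meminfo_sketch AnonHugePages    -> 0
    #   get_meminfo_sketch HugePages_Surp   -> 0
    #   get_meminfo_sketch HugePages_Total  -> 1025
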
22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.437 [setup/common.sh@31-@32: the scan reads and skips each /proc/meminfo key from Cached through Unaccepted while looking for HugePages_Surp] 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32
-- # continue 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104996640 kB' 'MemAvailable: 108484604 kB' 'Buffers: 2704 kB' 'Cached: 14483908 kB' 'SwapCached: 0 kB' 'Active: 11545368 kB' 'Inactive: 3523448 kB' 'Active(anon): 11071184 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585564 kB' 'Mapped: 186012 kB' 'Shmem: 10488980 kB' 'KReclaimable: 531276 kB' 'Slab: 1400148 kB' 'SReclaimable: 531276 kB' 'SUnreclaim: 868872 kB' 'KernelStack: 27280 kB' 'PageTables: 8564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 12642020 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235556 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4486516 kB' 'DirectMap2M: 33990656 kB' 'DirectMap1G: 97517568 kB' 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.439 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:57.439 [setup/common.sh@31-@32: the scan reads and skips each /proc/meminfo key from Active(anon) through FileHugePages while looking for HugePages_Rsvd] 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:57.441 nr_hugepages=1025 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:57.441 resv_hugepages=0 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:57.441 surplus_hugepages=0 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:57.441 anon_hugepages=0 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == 
nr_hugepages + surp + resv )) 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104996640 kB' 'MemAvailable: 108484604 kB' 'Buffers: 2704 kB' 'Cached: 14483908 kB' 'SwapCached: 0 kB' 'Active: 11545368 kB' 'Inactive: 3523448 kB' 'Active(anon): 11071184 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585564 kB' 'Mapped: 186012 kB' 'Shmem: 10488980 kB' 'KReclaimable: 531276 kB' 'Slab: 1400148 kB' 'SReclaimable: 531276 kB' 'SUnreclaim: 868872 kB' 'KernelStack: 27280 kB' 'PageTables: 8564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 12642040 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235556 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4486516 kB' 'DirectMap2M: 33990656 kB' 'DirectMap1G: 97517568 kB' 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.441 
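
With anon, surp and resv all read back as 0, the setup/hugepages.sh@102-@110 entries above echo the resulting counters and check that the odd hugepage count requested by this test, 1025, is exactly what the kernel now reports, with no surplus or reserved pages. A compressed sketch of that verification follows; it reuses get_meminfo_sketch from the note above, and the function name check_odd_alloc_sketch and the precise ordering of the comparisons are assumptions inferred from the traced line numbers, not the upstream hugepages.sh.

    # Sketch of the accounting check traced at setup/hugepages.sh@97-@110 above
    # (illustrative names; reuses get_meminfo_sketch from the previous sketch).
    check_odd_alloc_sketch() {
        local want=1025   # the odd hugepage count requested earlier in the test
        local anon surp resv nr_hugepages

        anon=$(get_meminfo_sketch AnonHugePages)            # 0 in the trace above
        surp=$(get_meminfo_sketch HugePages_Surp)           # 0
        resv=$(get_meminfo_sketch HugePages_Rsvd)           # 0
        nr_hugepages=$(get_meminfo_sketch HugePages_Total)  # 1025 per the snapshot

        echo "nr_hugepages=$nr_hugepages"
        echo "resv_hugepages=$resv"
        echo "surplus_hugepages=$surp"
        echo "anon_hugepages=$anon"

        # The kernel must report exactly the requested odd total, with nothing
        # left over as surplus or reserved pages.
        (( want == nr_hugepages + surp + resv )) || return 1
        (( want == nr_hugepages ))
    }
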
22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.441 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.708 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.708 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.708 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.708 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.708 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.708 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.708 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.708 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.708 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.708 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.708 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.708 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.708 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.708 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.708 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.708 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.708 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.708 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.708 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.708 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.708 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.708 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.708 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.708 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.708 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.708 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
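The trace above and below is setup/common.sh's get_meminfo walking every field of /proc/meminfo with `IFS=': '` and `read -r var val _`, skipping each line until it reaches the requested key (HugePages_Total here) and echoing its value. A minimal stand-alone sketch of that lookup pattern follows; lookup_meminfo is a hypothetical stand-in name, and the real get_meminfo additionally accepts an optional node argument and strips per-node prefixes.

  # Minimal sketch of the key scan shown in the trace; lookup_meminfo is a
  # hypothetical stand-in for get_meminfo in setup/common.sh.
  lookup_meminfo() {
      local get=$1 mem_f=/proc/meminfo var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # every other field is skipped, as traced above
          echo "$val"                        # e.g. 1025 for HugePages_Total on this box
          return 0
      done < "$mem_f"
      return 1
  }
  lookup_meminfo HugePages_Total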
00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.709 22:01:22 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:57.709 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53958508 kB' 'MemUsed: 11700500 kB' 'SwapCached: 0 kB' 'Active: 5092788 kB' 'Inactive: 3298656 kB' 'Active(anon): 4940228 
kB' 'Inactive(anon): 0 kB' 'Active(file): 152560 kB' 'Inactive(file): 3298656 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8096840 kB' 'Mapped: 104764 kB' 'AnonPages: 297912 kB' 'Shmem: 4645624 kB' 'KernelStack: 15688 kB' 'PageTables: 4992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 398624 kB' 'Slab: 913592 kB' 'SReclaimable: 398624 kB' 'SUnreclaim: 514968 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.710 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.711 22:01:22 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679872 kB' 'MemFree: 51038004 kB' 'MemUsed: 9641868 kB' 'SwapCached: 0 kB' 'Active: 6452596 kB' 'Inactive: 224792 kB' 'Active(anon): 6130972 kB' 'Inactive(anon): 0 kB' 'Active(file): 321624 kB' 'Inactive(file): 224792 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6389836 kB' 'Mapped: 81248 kB' 'AnonPages: 287648 kB' 'Shmem: 5843420 kB' 'KernelStack: 11592 kB' 'PageTables: 3572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132652 kB' 'Slab: 486556 kB' 'SReclaimable: 132652 kB' 'SUnreclaim: 353904 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:57.711 
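At this point node 0 has answered the HugePages_Surp query with 0, and the dump just above comes from /sys/devices/system/node/node1/meminfo: node 0 reports HugePages_Total 512 and node 1 reports 513, which is how the kernel spread the odd 1025-page request across the two nodes. When get_meminfo is given a node number it switches mem_f to that sysfs file and strips the "Node <n> " prefix from every line before running the same scan, as the mapfile and "${mem[@]#Node +([0-9]) }" steps in the trace show. A rough sketch of that per-node variant follows; the node number and the HugePages_Surp key are taken from the trace, everything else is a simplification.

  # Per-node lookup sketched from the trace (simplified; the real helper
  # lives in setup/common.sh and reads the file with mapfile as shown above).
  shopt -s extglob                      # for the +([0-9]) prefix pattern
  node=1 get=HugePages_Surp
  mem_f=/proc/meminfo
  [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
      mem_f=/sys/devices/system/node/node$node/meminfo
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")      # drop the "Node 1 " prefix sysfs adds
  for line in "${mem[@]}"; do
      IFS=': ' read -r var val _ <<< "$line"
      [[ $var == "$get" ]] && echo "$val"   # prints 0 on the node traced here
  done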
22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.711 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
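Right below, the per-node results are compared with what the test expected. Both tallies are recorded as indices of the sorted_t and sorted_s arrays seen in the trace, and listing an indexed array's keys with ${!arr[*]} yields them in ascending order, which appears to be why the final check reduces to [[ 512 513 == \5\1\2\ \5\1\3 ]] and passes no matter which node ended up holding the 513th page. A small self-contained illustration of that set-style comparison follows; the values 512 and 513 come from the trace, and the standalone variables are only for demonstration.

  # Order-independent comparison of per-node hugepage counts, mirroring the
  # sorted_t/sorted_s trick used in hugepages.sh right after this point.
  declare -a sorted_t=() sorted_s=()
  sorted_t[513]=1; sorted_t[512]=1     # what the test expected per node
  sorted_s[512]=1; sorted_s[513]=1     # what the kernel actually allocated
  [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo "sets match: ${!sorted_t[*]}"
  # -> sets match: 512 513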
00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:57.712 node0=512 expecting 513 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:57.712 node1=513 expecting 512 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:57.712 00:03:57.712 real 0m3.758s 00:03:57.712 user 0m1.441s 00:03:57.712 sys 0m2.362s 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:57.712 22:01:22 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:57.712 ************************************ 00:03:57.712 END TEST odd_alloc 00:03:57.712 ************************************ 00:03:57.712 22:01:22 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:57.712 22:01:22 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:57.712 22:01:22 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:57.712 22:01:22 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:57.712 22:01:22 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:57.712 ************************************ 00:03:57.712 START TEST custom_alloc 00:03:57.712 ************************************ 00:03:57.712 22:01:22 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:57.712 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:57.712 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:57.712 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:57.712 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:57.712 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:57.712 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:57.712 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:57.712 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 
1 )) 00:03:57.712 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:57.712 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:57.712 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:57.712 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:57.712 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:57.712 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:57.712 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:57.712 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:57.712 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:57.712 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:57.712 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:57.712 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:57.712 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:57.712 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:57.712 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:57.712 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:57.712 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:57.712 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:57.712 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:57.712 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:57.712 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:57.712 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:57.712 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:57.712 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:57.712 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:57.712 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:57.712 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:57.712 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:57.712 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:57.712 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:57.712 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:57.713 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:57.713 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:57.713 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:57.713 22:01:22 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:57.713 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:57.713 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:57.713 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:57.713 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:57.713 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:57.713 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:57.713 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:57.713 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:57.713 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:57.713 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:57.713 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:57.713 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:57.713 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:57.713 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:57.713 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:57.713 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:57.713 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:57.713 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:57.713 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:57.713 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:57.713 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:57.713 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:57.713 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:57.713 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:57.713 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:57.713 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:57.713 22:01:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:57.713 22:01:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.713 22:01:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:01.069 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:01.069 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:01.069 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:01.069 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 
00:04:01.069 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:01.069 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:01.069 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:01.069 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:01.069 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:01.069 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:01.069 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:01.069 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:01.069 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:01.069 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:01.069 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:01.069 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:01.069 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:01.334 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:01.334 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:01.334 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:01.334 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:01.334 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:01.334 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:01.334 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:01.334 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:01.334 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:01.334 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:01.334 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:01.334 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:01.334 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:01.334 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.334 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.334 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.334 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.334 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.334 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.334 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.334 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.334 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 103986412 kB' 'MemAvailable: 107474376 kB' 'Buffers: 2704 kB' 'Cached: 14484064 kB' 'SwapCached: 0 kB' 'Active: 11547392 kB' 'Inactive: 3523448 kB' 'Active(anon): 11073208 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 
kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 586972 kB' 'Mapped: 186564 kB' 'Shmem: 10489136 kB' 'KReclaimable: 531276 kB' 'Slab: 1400104 kB' 'SReclaimable: 531276 kB' 'SUnreclaim: 868828 kB' 'KernelStack: 27312 kB' 'PageTables: 8728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 12642964 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235540 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4486516 kB' 'DirectMap2M: 33990656 kB' 'DirectMap1G: 97517568 kB' 00:04:01.334 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.334 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.334 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.334 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.334 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.334 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.334 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.334 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.334 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.334 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.334 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.334 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.334 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
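The long run of "[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] ... continue" entries around this point is get_meminfo scanning the snapshot it just printed, one field at a time, until it reaches the requested counter (AnonHugePages here, then HugePages_Surp and HugePages_Rsvd further down); the backslash-escaped right-hand side is simply how bash xtrace renders the quoted key. A compact sketch of that lookup, with the function body reconstructed from the trace rather than copied from setup/common.sh:

#!/usr/bin/env bash
# Sketch of the get_meminfo lookup traced here: read the whole meminfo
# file, strip any "Node N " prefix, then scan "key: value" pairs until
# the requested counter is found. Reconstruction for illustration only.
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # per-node lookups read that node's own meminfo instead
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix, if any

    local line var val _
    local IFS=': '
    for line in "${mem[@]}"; do
        read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # not the counter we want, keep scanning
        echo "${val:-0}"
        return 0
    done
    echo 0
}

get_meminfo HugePages_Total   # prints 1536 with the pool set up in this run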
00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.335 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 
-- # local get=HugePages_Surp 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 103988300 kB' 'MemAvailable: 107476264 kB' 'Buffers: 2704 kB' 'Cached: 14484072 kB' 'SwapCached: 0 kB' 'Active: 11546060 kB' 'Inactive: 3523448 kB' 'Active(anon): 11071876 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 586120 kB' 'Mapped: 186028 kB' 'Shmem: 10489144 kB' 'KReclaimable: 531276 kB' 'Slab: 1400072 kB' 'SReclaimable: 531276 kB' 'SUnreclaim: 868796 kB' 'KernelStack: 27264 kB' 'PageTables: 8516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 12642984 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235492 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4486516 kB' 'DirectMap2M: 33990656 kB' 'DirectMap1G: 97517568 kB' 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.336 
22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.336 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.337 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 
103988940 kB' 'MemAvailable: 107476904 kB' 'Buffers: 2704 kB' 'Cached: 14484088 kB' 'SwapCached: 0 kB' 'Active: 11546080 kB' 'Inactive: 3523448 kB' 'Active(anon): 11071896 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 586124 kB' 'Mapped: 186028 kB' 'Shmem: 10489160 kB' 'KReclaimable: 531276 kB' 'Slab: 1400072 kB' 'SReclaimable: 531276 kB' 'SUnreclaim: 868796 kB' 'KernelStack: 27264 kB' 'PageTables: 8516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 12643004 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235492 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4486516 kB' 'DirectMap2M: 33990656 kB' 'DirectMap1G: 97517568 kB' 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.338 22:01:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
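By this point the scan has produced anon=0 and surp=0, the same loop is now running for HugePages_Rsvd, and the snapshot itself reports HugePages_Total: 1536, HugePages_Rsvd: 0 and Hugepagesize: 2048 kB, i.e. exactly the 512 + 1024 pages requested for the two nodes. An illustrative version of the consistency check this is building toward (variable names and the exact comparison are assumptions, not lifted from setup/hugepages.sh):

#!/usr/bin/env bash
# Illustrative check: with no surplus or reserved pages, the global
# HugePages_Total should equal the sum of the per-node requests
# (512 + 1024 = 1536 in this run).
nr_hugepages=1536
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)

if (( total == nr_hugepages && surp == 0 && rsvd == 0 )); then
    echo "hugepage pool matches the request: $total pages"
else
    echo "unexpected hugepage state: total=$total surp=$surp rsvd=$rsvd, wanted $nr_hugepages" >&2
    exit 1
fi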
00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.338 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.339 22:01:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:01.339 nr_hugepages=1536 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:01.339 resv_hugepages=0 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:01.339 surplus_hugepages=0 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:01.339 anon_hugepages=0 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.339 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 103988860 kB' 'MemAvailable: 107476824 kB' 'Buffers: 2704 kB' 'Cached: 14484112 kB' 'SwapCached: 0 kB' 'Active: 11546200 kB' 'Inactive: 3523448 kB' 'Active(anon): 11072016 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 
0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 586216 kB' 'Mapped: 186028 kB' 'Shmem: 10489184 kB' 'KReclaimable: 531276 kB' 'Slab: 1400072 kB' 'SReclaimable: 531276 kB' 'SUnreclaim: 868796 kB' 'KernelStack: 27264 kB' 'PageTables: 8516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 12643024 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235492 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4486516 kB' 'DirectMap2M: 33990656 kB' 'DirectMap1G: 97517568 kB' 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.340 22:01:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.340 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.341 22:01:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- 
00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:01.341 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53959436 kB' 'MemUsed: 11699572 kB' 'SwapCached: 0 kB' 'Active: 5092832 kB' 'Inactive: 3298656 kB' 'Active(anon): 4940272 kB' 'Inactive(anon): 0 kB' 'Active(file): 152560 kB' 'Inactive(file): 3298656 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8096876 kB' 'Mapped: 104780 kB' 'AnonPages: 297784 kB' 'Shmem: 4645660 kB' 'KernelStack: 15640 kB' 'PageTables: 4844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 398624 kB' 'Slab: 913196 kB' 'SReclaimable: 398624 kB' 'SUnreclaim: 514572 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
... [setup/common.sh@31-@32 scan repeats for each node0 meminfo key, MemTotal through HugePages_Free; none matches HugePages_Surp] ...
00:04:01.605 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:01.605 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:01.605 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:01.605 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
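Node 0 therefore contributes its full 512 hugepages with no surplus or reserve added, and the hugepages.sh@115-@117 loop repeats the same HugePages_Surp query for node 1 below. The accounting this run is driving at can be restated as a small stand-alone check; the 512/1024 split, resv=0, and the 1536 total are the values reported above, while the variable names and the awk lookup are mine and assume the standard per-node meminfo layout shown in this log:

    #!/usr/bin/env bash
    # Restatement of the per-node accounting in this custom_alloc run; values
    # come from the dumps above, names and helper commands are illustrative.
    declare -A expected=([0]=512 [1]=1024)   # per-node targets seen in get_nodes
    resv=0                                   # HugePages_Rsvd reported earlier
    total=0

    for node in "${!expected[@]}"; do
        surp=$(awk '/HugePages_Surp/ {print $NF}' \
            "/sys/devices/system/node/node${node}/meminfo")   # 0 on both nodes here
        (( expected[node] += resv + surp ))
        (( total += expected[node] ))
    done

    # 512 + 1024 + 0 surplus + 0 reserved == 1536 == nr_hugepages requested.
    (( total == 1536 )) && echo "split holds: node0=${expected[0]} node1=${expected[1]} total=$total"

The node 1 query that follows in the log feeds the same accumulation.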
00:04:01.605 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:01.605 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:01.605 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:01.605 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:01.605 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:04:01.605 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:01.605 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:01.605 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.605 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:01.605 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:01.605 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.605 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.605 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:01.605 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:01.605 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679872 kB' 'MemFree: 50029808 kB' 'MemUsed: 10650064 kB' 'SwapCached: 0 kB' 'Active: 6453304 kB' 'Inactive: 224792 kB' 'Active(anon): 6131680 kB' 'Inactive(anon): 0 kB' 'Active(file): 321624 kB' 'Inactive(file): 224792 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6389976 kB' 'Mapped: 81248 kB' 'AnonPages: 288332 kB' 'Shmem: 5843560 kB' 'KernelStack: 11576 kB' 'PageTables: 3568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132652 kB' 'Slab: 486876 kB' 'SReclaimable: 132652 kB' 'SUnreclaim: 354224 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
... [setup/common.sh@31-@32 scan repeats for each node1 meminfo key, MemTotal through HugePages_Free; none matches HugePages_Surp] ...
00:04:01.606 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:01.606 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:01.606 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[
HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.606 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.606 22:01:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:01.606 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:01.606 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:01.606 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:01.606 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:01.606 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:01.606 node0=512 expecting 512 00:04:01.606 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:01.606 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:01.606 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:01.606 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:01.606 node1=1024 expecting 1024 00:04:01.606 22:01:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:01.606 00:04:01.606 real 0m3.793s 00:04:01.606 user 0m1.497s 00:04:01.607 sys 0m2.280s 00:04:01.607 22:01:26 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.607 22:01:26 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:01.607 ************************************ 00:04:01.607 END TEST custom_alloc 00:04:01.607 ************************************ 00:04:01.607 22:01:26 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:01.607 22:01:26 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:01.607 22:01:26 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.607 22:01:26 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.607 22:01:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:01.607 ************************************ 00:04:01.607 START TEST no_shrink_alloc 00:04:01.607 ************************************ 00:04:01.607 22:01:26 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:01.607 22:01:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:01.607 22:01:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:01.607 22:01:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:01.607 22:01:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:01.607 22:01:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:01.607 22:01:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:01.607 22:01:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:01.607 22:01:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:01.607 22:01:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- 
# get_test_nr_hugepages_per_node 0 00:04:01.607 22:01:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:01.607 22:01:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:01.607 22:01:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:01.607 22:01:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:01.607 22:01:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:01.607 22:01:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:01.607 22:01:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:01.607 22:01:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:01.607 22:01:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:01.607 22:01:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:01.607 22:01:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:01.607 22:01:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.607 22:01:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:04.911 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:04.911 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:04.911 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:04.911 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:04.911 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:04.911 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:04.911 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:04.911 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:04.911 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:04.911 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:04.911 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:04.911 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:04.911 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:04.911 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:04.911 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:04.911 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:04.911 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:04.911 
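The trace above closes out custom_alloc by checking that the per-node hugepage counts match the expected 512/1024 split, then starts no_shrink_alloc by requesting 1024 pages pinned to node 0 through get_test_nr_hugepages_per_node. A minimal sketch of that request-and-check logic, reconstructed from the xtrace rather than taken verbatim from setup/hugepages.sh (the nodes_test array and the node0=.../node1=... output follow the trace; the helper names, the sort, and the usage at the bottom are illustrative), is:

```bash
#!/usr/bin/env bash
# Reconstructed from the xtrace above; names like nodes_test follow the trace,
# the helper names and the usage at the bottom are illustrative.

declare -A nodes_test                  # node id -> hugepages requested on that node

request_per_node() {                   # request_per_node <pages> <node>...
    local pages=$1; shift
    local node
    for node in "$@"; do
        nodes_test[$node]=$pages       # mirrors setup/hugepages.sh@70-71 in the trace
    done
}

check_split() {                        # check_split "512,1024"
    local expected=$1 node joined=
    for node in $(printf '%s\n' "${!nodes_test[@]}" | sort -n); do
        echo "node$node=${nodes_test[$node]} expecting ${nodes_test[$node]}"
        joined+=${joined:+,}${nodes_test[$node]}
    done
    [[ $joined == "$expected" ]]       # the "512,1024 == 512,1024" check in the log
}

request_per_node 512 0                 # custom_alloc-style split (assumed usage)
request_per_node 1024 1
check_split "512,1024" && echo "split OK"
```

Keeping the per-node requests in an array keyed by node id is what lets the same check cover both the single-node no_shrink_alloc case and the split custom_alloc case seen earlier in the log.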
22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105049128 kB' 'MemAvailable: 108537092 kB' 'Buffers: 2704 kB' 'Cached: 14484244 kB' 'SwapCached: 0 kB' 'Active: 11548840 kB' 'Inactive: 3523448 kB' 'Active(anon): 11074656 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 589152 kB' 'Mapped: 186176 kB' 'Shmem: 10489316 kB' 'KReclaimable: 531276 kB' 'Slab: 1401140 kB' 'SReclaimable: 531276 kB' 'SUnreclaim: 869864 kB' 'KernelStack: 27312 kB' 'PageTables: 8668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12643644 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235556 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4486516 kB' 'DirectMap2M: 33990656 kB' 'DirectMap1G: 97517568 kB' 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.911 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105057840 kB' 'MemAvailable: 108545804 kB' 'Buffers: 2704 kB' 'Cached: 14484248 kB' 'SwapCached: 0 kB' 'Active: 11548580 kB' 'Inactive: 3523448 kB' 
'Active(anon): 11074396 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 589492 kB' 'Mapped: 186060 kB' 'Shmem: 10489320 kB' 'KReclaimable: 531276 kB' 'Slab: 1401116 kB' 'SReclaimable: 531276 kB' 'SUnreclaim: 869840 kB' 'KernelStack: 27280 kB' 'PageTables: 8552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12643664 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235524 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4486516 kB' 'DirectMap2M: 33990656 kB' 'DirectMap1G: 97517568 kB' 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.912 22:01:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.912 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
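The key-by-key scan filling this part of the log is the get_meminfo helper walking /proc/meminfo: it loads the file (or a node's meminfo when a node id is passed), strips any leading "Node N " prefix, splits each line on ': ', and skips entries with continue until the requested field (here HugePages_Surp) matches, then echoes its value. A compact reconstruction of that loop, based on the setup/common.sh line numbers visible in the trace and not on the verbatim source (the extglob prefix strip and the not-found return are assumptions), looks like:

```bash
#!/usr/bin/env bash
shopt -s extglob                       # needed for the "Node N " prefix strip below

# Reconstruction of the meminfo scan traced as setup/common.sh@17-33 above;
# illustrative, not the verbatim SPDK helper.
get_meminfo() {
    local get=$1 node=${2:-}           # field to fetch, optional NUMA node id
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "

    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the long run of "continue" entries in the log
        echo "$val"                        # e.g. 0 for HugePages_Surp
        return 0
    done
    return 1                               # field not present (assumed behaviour)
}

surp=$(get_meminfo HugePages_Surp)         # the call being traced here
echo "surp=$surp"
```

In this run the helper returns 0 for both AnonHugePages and HugePages_Surp, as the echo 0 / return 0 entries in this block show.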
00:04:04.913 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:04.914 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:04.914 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:04.914 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue (ShmemHugePages through HugePages_Rsvd are compared and skipped the same way)
00:04:04.914 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:04.914 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:04.914 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:04.914 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:04.914 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:04.914 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:04.914 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:04.914 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:04.914 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:04.914 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.914 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:04.914 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:04.914 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.914 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.914 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:04.914 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:04.914 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105058168 kB' 'MemAvailable: 108546132 kB' 'Buffers: 2704 kB' 'Cached: 14484260 kB' 'SwapCached: 0 kB' 'Active: 11548800 kB' 'Inactive: 3523448 kB' 'Active(anon): 11074616 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 589764 kB' 'Mapped: 186060 kB' 'Shmem: 10489332 kB' 'KReclaimable: 531276 kB' 'Slab: 1401120 kB' 'SReclaimable: 531276 kB' 'SUnreclaim: 869844 kB' 'KernelStack: 27312 kB' 'PageTables: 8672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12644436 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235556 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4486516 kB' 'DirectMap2M: 33990656 kB' 'DirectMap1G: 97517568 kB'
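
The surrounding trace is setup/common.sh's get_meminfo: it reads the chosen meminfo file with IFS=': ' read -r var val _ and keeps issuing "continue" until the requested key matches, then echoes that key's value. A minimal stand-alone sketch of the same parsing pattern; the function name meminfo_value is illustrative, not the actual SPDK helper:

  #!/usr/bin/env bash
  # Sketch: print the value of one meminfo field, the way the traced loop does it.
  meminfo_value() {
      local want=$1 file=${2:-/proc/meminfo}
      local key val _rest
      while IFS=': ' read -r key val _rest; do
          [[ $key == "$want" ]] || continue   # non-matching keys are skipped, as in the trace
          echo "$val"
          return 0
      done <"$file"
      return 1                                # key not present in this meminfo file
  }

  meminfo_value HugePages_Total   # -> 1024 on this host
  meminfo_value HugePages_Rsvd    # -> 0
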
00:04:04.915 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:04.915 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue (each remaining /proc/meminfo key through HugePages_Free is compared and skipped the same way)
00:04:04.916 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:04.916 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:04.916 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:04.916 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:04.916 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:04.916 nr_hugepages=1024
00:04:04.916 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:04.916 resv_hugepages=0
00:04:04.916 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:04.916 surplus_hugepages=0
00:04:04.916 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:04.916 anon_hugepages=0
00:04:04.916 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:04.916 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
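
The nr_hugepages/resv_hugepages/surplus_hugepages lines just above, followed by the two (( ... )) checks, are hugepages.sh confirming that the kernel's counters match what the test configured. A rough, self-contained sketch of that kind of bookkeeping, assuming the 1024-page setup seen in this run (plain awk here, not the exact hugepages.sh logic):

  #!/usr/bin/env bash
  # Sketch: compare the kernel's hugepage counters against the requested count.
  nr_hugepages=1024   # value configured earlier in this test run

  total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
  surp=$(awk '$1 == "HugePages_Surp:"  {print $2}' /proc/meminfo)
  resv=$(awk '$1 == "HugePages_Rsvd:"  {print $2}' /proc/meminfo)

  echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"

  # Checks analogous to the (( ... )) lines at hugepages.sh@107/@109 in the trace above:
  # all pages are accounted for, and none of the requested pool was shrunk away.
  (( total == nr_hugepages + surp + resv )) || { echo "hugepage accounting mismatch" >&2; exit 1; }
  (( total == nr_hugepages )) || { echo "unexpected HugePages_Total=$total" >&2; exit 1; }
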
00:04:04.916 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:04.916 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:04.916 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:04.916 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:04.916 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:04.916 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.916 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:04.916 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:04.916 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.916 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.916 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:04.916 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:04.916 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105058256 kB' 'MemAvailable: 108546220 kB' 'Buffers: 2704 kB' 'Cached: 14484288 kB' 'SwapCached: 0 kB' 'Active: 11548576 kB' 'Inactive: 3523448 kB' 'Active(anon): 11074392 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 589468 kB' 'Mapped: 186060 kB' 'Shmem: 10489360 kB' 'KReclaimable: 531276 kB' 'Slab: 1401120 kB' 'SReclaimable: 531276 kB' 'SUnreclaim: 869844 kB' 'KernelStack: 27248 kB' 'PageTables: 8456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12643708 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235508 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4486516 kB' 'DirectMap2M: 33990656 kB' 'DirectMap1G: 97517568 kB'
00:04:04.917 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:04.917 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue (each remaining /proc/meminfo key through Unaccepted is compared and skipped the same way)
00:04:04.918 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:04.918 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:04.918 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:04.918 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:04.918 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:04.918 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:04.918 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:04.918 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:04.918 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:04.918 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:04.918 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:04.918 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:04.918 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:04.918 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:04.918 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:04.918 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:04.918 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:04.918 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:04.918 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:04.918 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.918 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:04.918 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:04.918 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.918 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.918 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:04.918 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:04.918 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 52915648 kB' 'MemUsed: 12743360 kB' 'SwapCached: 0 kB' 'Active: 5092508 kB' 'Inactive: 3298656 kB' 'Active(anon): 4939948 kB' 'Inactive(anon): 0 kB' 'Active(file): 152560 kB' 'Inactive(file): 3298656 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8096888 kB' 'Mapped: 104812 kB' 'AnonPages: 298192 kB' 'Shmem: 4645672 kB' 'KernelStack: 15672 kB' 'PageTables: 4892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 398592 kB' 'Slab: 914168 kB' 'SReclaimable: 398592 kB' 'SUnreclaim: 515576 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
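
get_nodes above globs /sys/devices/system/node/node+([0-9]) (two nodes on this host), and get_meminfo is then re-run against node0's own meminfo file, whose lines carry a "Node <N>" prefix that the traced mem=("${mem[@]#Node +([0-9]) }") expansion strips off. A hedged sketch of the same per-node walk, using plain awk instead of the SPDK helpers:

  #!/usr/bin/env bash
  # Sketch: per-NUMA-node hugepage counters from sysfs.
  # Node meminfo lines look like "Node 0 HugePages_Total:  1024", hence $3/$4 below.
  shopt -s nullglob
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      node=${node_dir##*node}
      total=$(awk '$3 == "HugePages_Total:" {print $4}' "$node_dir/meminfo")
      free=$(awk '$3 == "HugePages_Free:"  {print $4}' "$node_dir/meminfo")
      surp=$(awk '$3 == "HugePages_Surp:"  {print $4}' "$node_dir/meminfo")
      echo "node$node: HugePages_Total=$total HugePages_Free=$free HugePages_Surp=$surp"
  done

On this machine such a walk would report all 1024 pages sitting on node0, which is what the "node0=1024 expecting 1024" line further down asserts.
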
00:04:04.919 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:04.919 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue (each remaining node0 meminfo key through HugePages_Free is compared and skipped the same way)
00:04:04.919 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:04.919 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:04.919 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:04.919 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:04.919 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:04.919 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:04.919 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:04.919 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:04.919 node0=1024 expecting 1024
00:04:04.919 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:04.919 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:04.919 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:04.919 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:04:04.919 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:04.919 22:01:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:08.219 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:04:08.220 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:04:08.220 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:04:08.220 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:04:08.220 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:04:08.220 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:04:08.220 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:04:08.220 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:04:08.220 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:04:08.220 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:04:08.220 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
vfio-pci driver 00:04:08.220 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:08.220 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:08.220 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:08.220 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:08.220 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:08.220 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:08.481 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105046836 kB' 'MemAvailable: 108534768 kB' 'Buffers: 2704 kB' 'Cached: 14484396 kB' 'SwapCached: 0 kB' 'Active: 11550920 kB' 'Inactive: 3523448 kB' 'Active(anon): 11076736 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 590548 kB' 'Mapped: 185932 kB' 'Shmem: 10489468 kB' 'KReclaimable: 531244 kB' 'Slab: 1400968 kB' 'SReclaimable: 531244 kB' 'SUnreclaim: 869724 kB' 'KernelStack: 27280 kB' 'PageTables: 8556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12644700 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 
235572 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4486516 kB' 'DirectMap2M: 33990656 kB' 'DirectMap1G: 97517568 kB' 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.481 22:01:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.481 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.482 22:01:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.482 
22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.482 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.748 22:01:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 
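
What the trace above shows: the get_meminfo helper in setup/common.sh prints the whole of /proc/meminfo (or /sys/devices/system/node/node<N>/meminfo when a node is given) as one quoted list, strips any "Node <N> " prefix, then reads it back field by field with IFS=': ', comparing each key against the requested one and echoing only that value — here AnonHugePages, which resolves to 0, so verify_nr_hugepages records anon=0. A minimal standalone sketch of the same lookup, written only for illustration and not taken from the script itself:

get_meminfo_sketch() {
    # key to look up, plus an optional NUMA node number
    local key=$1 node=${2:-}
    local file=/proc/meminfo
    [[ -n $node ]] && file=/sys/devices/system/node/node$node/meminfo
    # Per-node meminfo lines carry a "Node <N> " prefix; strip it, then print
    # the numeric value of the first line whose key matches exactly.
    sed 's/^Node [0-9]* //' "$file" |
        awk -v k="$key" -F': *' '$1 == k { print $2 + 0; exit }'
}

# e.g. get_meminfo_sketch HugePages_Surp      -> 0 on this runner
# e.g. get_meminfo_sketch HugePages_Total 0   -> per-node total, if node0 exists

The real helper does the per-line comparison in pure bash (the [[ ... == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] lines above are its xtrace output, one per meminfo key); the awk form is only a compact equivalent.
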
00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105047936 kB' 'MemAvailable: 108535868 kB' 'Buffers: 2704 kB' 'Cached: 14484400 kB' 'SwapCached: 0 kB' 'Active: 11550564 kB' 'Inactive: 3523448 kB' 'Active(anon): 11076380 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 590288 kB' 'Mapped: 186076 kB' 'Shmem: 10489472 kB' 'KReclaimable: 531244 kB' 'Slab: 1401040 kB' 'SReclaimable: 531244 kB' 'SUnreclaim: 869796 kB' 'KernelStack: 27280 kB' 'PageTables: 8584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12644716 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235556 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4486516 kB' 'DirectMap2M: 33990656 kB' 'DirectMap1G: 97517568 kB' 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.748 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.749 22:01:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.749 22:01:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.749 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.750 22:01:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:08.750 
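
At this point verify_nr_hugepages has anon=0 and surp=0 and is fetching HugePages_Rsvd the same way. Earlier in the run the per-node loop printed "node0=1024 expecting 1024" and the [[ 1024 == 1024 ]] check passed, i.e. the 1024 huge pages already allocated satisfy the test even though NRHUGE=512 was requested (hence the "Requested 512 hugepages but 1024 already allocated on node0" notice from setup.sh, with CLEAR_HUGE=no so the existing pool is kept). A hedged sketch of the bookkeeping this implies — variable names follow the trace, but the exact arithmetic lives in setup/hugepages.sh and is not fully visible in this excerpt:

verify_hugepages_sketch() {
    local expected=$1
    # small local lookup; the real code uses get_meminfo from setup/common.sh
    hp() { awk -v k="$1" -F': *' '$1 == k { print $2 + 0; exit }' /proc/meminfo; }
    local total surp resv
    total=$(hp HugePages_Total)
    surp=$(hp HugePages_Surp)
    resv=$(hp HugePages_Rsvd)
    # Surplus pages sit above the static pool and reserved pages are already
    # promised to a mapping, so neither is counted as freely usable here.
    local usable=$(( total - surp - resv ))
    echo "node0=$usable expecting $expected"
    [[ $usable -eq $expected ]]
}

verify_hugepages_sketch 1024   # passes on this runner: 1024 - 0 - 0 == 1024

With surplus and reserved both zero, the usable count equals HugePages_Total, which is why the earlier 1024-page check succeeded without shrinking or growing the pool.
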
22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105047936 kB' 'MemAvailable: 108535868 kB' 'Buffers: 2704 kB' 'Cached: 14484420 kB' 'SwapCached: 0 kB' 'Active: 11550544 kB' 'Inactive: 3523448 kB' 'Active(anon): 11076360 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 590248 kB' 'Mapped: 186076 kB' 'Shmem: 10489492 kB' 'KReclaimable: 531244 kB' 'Slab: 1401040 kB' 'SReclaimable: 531244 kB' 'SUnreclaim: 869796 kB' 'KernelStack: 27264 kB' 'PageTables: 8532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12644740 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235556 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4486516 kB' 'DirectMap2M: 33990656 kB' 'DirectMap1G: 97517568 kB' 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.750 22:01:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.750 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.751 22:01:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:08.751 nr_hugepages=1024 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:08.751 resv_hugepages=0 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:08.751 surplus_hugepages=0 00:04:08.751 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:08.751 anon_hugepages=0 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local 
node= 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105047936 kB' 'MemAvailable: 108535868 kB' 'Buffers: 2704 kB' 'Cached: 14484460 kB' 'SwapCached: 0 kB' 'Active: 11550268 kB' 'Inactive: 3523448 kB' 'Active(anon): 11076084 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 589896 kB' 'Mapped: 186076 kB' 'Shmem: 10489532 kB' 'KReclaimable: 531244 kB' 'Slab: 1401040 kB' 'SReclaimable: 531244 kB' 'SUnreclaim: 869796 kB' 'KernelStack: 27264 kB' 'PageTables: 8532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12644760 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235556 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4486516 kB' 'DirectMap2M: 33990656 kB' 'DirectMap1G: 97517568 kB' 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.752 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.753 22:01:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.753 22:01:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.753 22:01:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.753 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 52886060 kB' 'MemUsed: 12772948 kB' 'SwapCached: 0 kB' 'Active: 5093576 kB' 'Inactive: 3298656 kB' 'Active(anon): 4941016 kB' 'Inactive(anon): 0 kB' 'Active(file): 152560 kB' 'Inactive(file): 3298656 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8096892 kB' 'Mapped: 104828 kB' 'AnonPages: 298512 kB' 'Shmem: 4645676 kB' 'KernelStack: 15688 kB' 'PageTables: 4948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 398592 kB' 'Slab: 914264 kB' 'SReclaimable: 398592 kB' 'SUnreclaim: 515672 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.754 22:01:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.754 
22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.754 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.755 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.755 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.755 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.755 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.755 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.755 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.755 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:08.755 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:08.755 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:08.755 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:08.755 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:08.755 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:08.755 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 
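The trace above is setup/common.sh's get_meminfo walking /proc/meminfo (or a per-node meminfo file) field by field until it reaches the requested key, then echoing its value — here HugePages_Rsvd=0 and HugePages_Total=1024 globally, and HugePages_Surp=0 for node 0. A condensed sketch of the same technique follows; get_meminfo_field is an illustrative name, not the SPDK helper, and the sed step only strips the "Node <n> " prefix that the per-node files add to every line:

    get_meminfo_field() {
        # Usage: get_meminfo_field <key> [node], e.g. get_meminfo_field HugePages_Total
        #        or get_meminfo_field HugePages_Surp 0 for the node-0 value.
        local key=$1 node=$2
        local file=/proc/meminfo
        [[ -n $node ]] && file=/sys/devices/system/node/node$node/meminfo
        # Drop the optional "Node <n> " prefix, then print the value of "<key>:".
        sed -E 's/^Node [0-9]+ //' "$file" | awk -v k="$key:" '$1 == k { print $2; exit }'
    }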
00:04:08.755 node0=1024 expecting 1024 00:04:08.755 22:01:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:08.755 00:04:08.755 real 0m7.168s 00:04:08.755 user 0m2.718s 00:04:08.755 sys 0m4.444s 00:04:08.755 22:01:33 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:08.755 22:01:33 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:08.755 ************************************ 00:04:08.755 END TEST no_shrink_alloc 00:04:08.755 ************************************ 00:04:08.755 22:01:33 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:08.755 22:01:33 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:08.755 22:01:33 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:08.755 22:01:33 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:08.755 22:01:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:08.755 22:01:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:08.755 22:01:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:08.755 22:01:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:08.755 22:01:33 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:08.755 22:01:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:08.755 22:01:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:08.755 22:01:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:08.755 22:01:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:08.755 22:01:33 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:08.755 22:01:33 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:08.755 00:04:08.755 real 0m26.540s 00:04:08.755 user 0m10.287s 00:04:08.755 sys 0m16.348s 00:04:08.755 22:01:33 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:08.755 22:01:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:08.755 ************************************ 00:04:08.755 END TEST hugepages 00:04:08.755 ************************************ 00:04:08.755 22:01:34 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:08.755 22:01:34 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:08.755 22:01:34 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:08.755 22:01:34 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.755 22:01:34 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:09.016 ************************************ 00:04:09.016 START TEST driver 00:04:09.016 ************************************ 00:04:09.016 22:01:34 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:09.016 * Looking for test storage... 
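The no_shrink_alloc block above walks a meminfo-style dump key by key until it hits HugePages_Surp, confirms that node0 still reports the expected 1024 pages, and then clear_hp zeroes every per-node hugepage pool before exporting CLEAR_HUGE=yes. A minimal shell sketch of that bookkeeping (the 2048kB pool path is just the common case; the trace actually globs over every hugepages-* directory):

# Sketch only: the per-node hugepage check and the clear_hp-style reset seen above.
expected=1024
node0=$(cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages)
echo "node0=$node0 expecting $expected"
[[ $node0 -eq $expected ]] || exit 1

# clear_hp equivalent: zero every hugepage pool on every NUMA node for the next test.
for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
    echo 0 | sudo tee "$hp" > /dev/null
done
export CLEAR_HUGE=yes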
00:04:09.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:09.016 22:01:34 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:09.016 22:01:34 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:09.016 22:01:34 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:14.302 22:01:39 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:14.303 22:01:39 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:14.303 22:01:39 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.303 22:01:39 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:14.303 ************************************ 00:04:14.303 START TEST guess_driver 00:04:14.303 ************************************ 00:04:14.303 22:01:39 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:14.303 22:01:39 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:14.303 22:01:39 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:14.303 22:01:39 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:14.303 22:01:39 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:14.303 22:01:39 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:14.303 22:01:39 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:14.303 22:01:39 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:14.303 22:01:39 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:14.303 22:01:39 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:14.303 22:01:39 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 314 > 0 )) 00:04:14.303 22:01:39 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:14.303 22:01:39 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:14.303 22:01:39 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:14.303 22:01:39 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:14.303 22:01:39 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:14.303 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:14.303 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:14.303 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:14.303 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:14.303 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:14.303 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:14.303 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:14.303 22:01:39 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:14.303 22:01:39 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:14.303 22:01:39 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:14.303 22:01:39 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:14.303 22:01:39 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:14.303 Looking for driver=vfio-pci 00:04:14.303 22:01:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.303 22:01:39 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:14.303 22:01:39 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.303 22:01:39 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:17.602 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.602 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.602 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.602 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.602 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.602 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.602 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.602 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.602 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.603 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.603 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.603 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.603 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.603 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.603 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.603 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.603 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.603 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.603 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.603 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.603 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.603 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.603 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.603 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.603 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.603 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.603 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.603 22:01:42 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.603 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.603 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.603 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.603 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.603 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.603 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.603 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.603 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.603 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.603 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.603 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.603 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.603 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.603 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.603 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.603 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.603 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.603 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.603 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.603 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.603 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.603 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.603 22:01:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.864 22:01:43 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:17.864 22:01:43 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:17.864 22:01:43 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:17.865 22:01:43 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:23.215 00:04:23.215 real 0m8.636s 00:04:23.215 user 0m2.837s 00:04:23.215 sys 0m4.982s 00:04:23.215 22:01:47 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:23.215 22:01:47 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:23.215 ************************************ 00:04:23.215 END TEST guess_driver 00:04:23.215 ************************************ 00:04:23.215 22:01:47 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:23.215 00:04:23.215 real 0m13.747s 00:04:23.215 user 0m4.350s 00:04:23.215 sys 0m7.790s 00:04:23.215 22:01:47 
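The guess_driver test that just finished settles on vfio-pci because the host exposes IOMMU groups (314 of them) and modprobe can resolve the vfio_pci module chain; only then is 'Looking for driver=vfio-pci' echoed and setup.sh config re-run. A rough sketch of that decision, noting that the real check parses the modprobe --show-depends output for .ko entries rather than trusting its exit status:

# Sketch of the vfio-pci eligibility check performed by setup/driver.sh.
pick_driver() {
    shopt -s nullglob
    local groups=(/sys/kernel/iommu_groups/*)
    local unsafe_vfio=N
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe_vfio=$(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)

    # vfio-pci needs a working IOMMU (or the unsafe no-IOMMU opt-in) and a loadable module chain.
    if (( ${#groups[@]} > 0 )) || [[ $unsafe_vfio == [Yy] ]]; then
        modprobe --show-depends vfio_pci &> /dev/null && { echo vfio-pci; return; }
    fi
    echo 'No valid driver found'
}

echo "Looking for driver=$(pick_driver)"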
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:23.215 22:01:47 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:23.215 ************************************ 00:04:23.215 END TEST driver 00:04:23.215 ************************************ 00:04:23.215 22:01:47 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:23.215 22:01:47 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:23.215 22:01:47 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:23.215 22:01:47 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.215 22:01:47 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:23.215 ************************************ 00:04:23.215 START TEST devices 00:04:23.215 ************************************ 00:04:23.215 22:01:47 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:23.215 * Looking for test storage... 00:04:23.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:23.215 22:01:47 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:23.215 22:01:47 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:23.215 22:01:47 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:23.215 22:01:47 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:27.419 22:01:51 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:27.419 22:01:51 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:27.419 22:01:51 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:27.419 22:01:51 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:27.419 22:01:51 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:27.419 22:01:51 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:27.419 22:01:51 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:27.419 22:01:51 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:27.419 22:01:51 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:27.419 22:01:51 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:27.419 22:01:51 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:27.419 22:01:51 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:27.419 22:01:51 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:27.419 22:01:51 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:27.419 22:01:51 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:27.419 22:01:51 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:27.419 22:01:51 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:27.419 22:01:51 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:04:27.419 22:01:51 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:27.419 22:01:51 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:27.419 22:01:51 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:27.419 
22:01:51 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:27.419 No valid GPT data, bailing 00:04:27.419 22:01:51 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:27.419 22:01:51 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:27.419 22:01:51 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:27.419 22:01:51 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:27.419 22:01:51 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:27.419 22:01:51 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:27.419 22:01:51 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:04:27.419 22:01:51 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:04:27.419 22:01:51 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:27.419 22:01:51 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:04:27.419 22:01:51 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:27.419 22:01:51 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:27.419 22:01:51 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:27.419 22:01:51 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:27.419 22:01:51 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.419 22:01:51 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:27.419 ************************************ 00:04:27.419 START TEST nvme_mount 00:04:27.419 ************************************ 00:04:27.419 22:01:51 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:27.419 22:01:51 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:27.419 22:01:51 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:27.419 22:01:51 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.419 22:01:51 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:27.419 22:01:51 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:27.419 22:01:51 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:27.419 22:01:51 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:27.419 22:01:51 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:27.419 22:01:51 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:27.420 22:01:51 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:27.420 22:01:51 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:27.420 22:01:51 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:27.420 22:01:51 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:27.420 22:01:51 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:27.420 22:01:51 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:27.420 22:01:51 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
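By this point devices.sh has qualified /dev/nvme0n1 as the test disk: the namespace is not zoned, spdk-gpt.py reports 'No valid GPT data, bailing', blkid finds no partition-table type, and the 1920383410176-byte capacity clears the 3221225472-byte minimum, so the block device is mapped to PCI address 0000:65:00.0. A condensed version of that gate using standard tools (the helper name is invented, and blkid stands in for the spdk-gpt.py probe):

# Sketch: decide whether a bare NVMe namespace is safe to use as the test disk.
min_disk_size=$((3 * 1024 * 1024 * 1024))    # 3221225472 bytes, as in devices.sh

is_usable_test_disk() {
    local dev=$1                              # e.g. nvme0n1
    if [[ -e /sys/block/$dev/queue/zoned ]]; then
        [[ $(< /sys/block/$dev/queue/zoned) == none ]] || return 1   # skip zoned namespaces
    fi
    [[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]] || return 1     # existing partition table means the disk is in use
    (( $(blockdev --getsize64 "/dev/$dev") >= min_disk_size )) || return 1
}

if is_usable_test_disk nvme0n1; then
    # For a typical single-controller PCIe namespace this resolves to the controller's PCI address.
    pci=$(basename "$(readlink -f /sys/block/nvme0n1/device/device)")   # 0000:65:00.0 in this run
    echo "test disk nvme0n1 on $pci"
fi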
# (( part <= part_no )) 00:04:27.420 22:01:51 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:27.420 22:01:51 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:27.420 22:01:51 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:27.680 Creating new GPT entries in memory. 00:04:27.680 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:27.680 other utilities. 00:04:27.680 22:01:52 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:27.680 22:01:52 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:27.680 22:01:52 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:27.680 22:01:52 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:27.680 22:01:52 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:29.060 Creating new GPT entries in memory. 00:04:29.060 The operation has completed successfully. 00:04:29.060 22:01:54 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:29.060 22:01:54 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:29.060 22:01:54 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2536017 00:04:29.060 22:01:54 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.060 22:01:54 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:29.060 22:01:54 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.060 22:01:54 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:29.060 22:01:54 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:29.060 22:01:54 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.060 22:01:54 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:29.060 22:01:54 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:29.060 22:01:54 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:29.060 22:01:54 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.060 22:01:54 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:29.060 22:01:54 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:29.060 22:01:54 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:29.060 22:01:54 
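nvme_mount then carves a single 1 GiB partition out of the disk, formats it, and mounts it before dropping the dummy test_nvme file that the verify pass looks for. Done by hand the sequence is roughly the following; the mount point is shortened here (the job uses the long .../spdk/test/setup/nvme_mount path) and udevadm settle stands in for scripts/sync_dev_uevents.sh:

# Sketch of the partition / format / mount step traced above.
disk=/dev/nvme0n1
mnt=/tmp/nvme_mount                                # stand-in for the workspace mount point

sgdisk "$disk" --zap-all                           # drop any existing GPT/MBR
flock "$disk" sgdisk "$disk" --new=1:2048:2099199  # 1 GiB partition, sectors 2048..2099199
udevadm settle                                     # wait for /dev/nvme0n1p1 to appear

mkdir -p "$mnt"
mkfs.ext4 -qF "${disk}p1"
mount "${disk}p1" "$mnt"
touch "$mnt/test_nvme"                             # the dummy file checked by verify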
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:29.060 22:01:54 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:29.060 22:01:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.060 22:01:54 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:29.060 22:01:54 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:29.060 22:01:54 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.060 22:01:54 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:31.603 22:01:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.603 22:01:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.603 22:01:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.603 22:01:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.603 22:01:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.603 22:01:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.603 22:01:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.603 22:01:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.603 22:01:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.603 22:01:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.603 22:01:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.603 22:01:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.603 22:01:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.603 22:01:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.603 22:01:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.603 22:01:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.863 22:01:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.863 22:01:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:31.863 22:01:56 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:31.863 22:01:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.863 22:01:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.863 22:01:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.863 22:01:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.863 22:01:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.863 22:01:56 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.863 22:01:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.863 22:01:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.863 22:01:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.863 22:01:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.863 22:01:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.863 22:01:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.863 22:01:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.863 22:01:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.863 22:01:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.863 22:01:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.863 22:01:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.124 22:01:57 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:32.124 22:01:57 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:32.124 22:01:57 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:32.124 22:01:57 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:32.124 22:01:57 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:32.124 22:01:57 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:32.124 22:01:57 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:32.124 22:01:57 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:32.124 22:01:57 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:32.124 22:01:57 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:32.124 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:32.124 22:01:57 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:32.124 22:01:57 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:32.384 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:32.384 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:32.384 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:32.384 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:32.384 22:01:57 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:32.384 22:01:57 setup.sh.devices.nvme_mount -- 
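After the first verify pass, cleanup_nvme unmounts the partition and wipes both the ext4 signature on nvme0n1p1 and the GPT headers on the whole disk; the wipefs lines in the trace list exactly which magic bytes were erased and at which offsets. In shell terms:

# Sketch of cleanup_nvme: unmount, then strip filesystem and partition-table signatures.
mnt=/tmp/nvme_mount                                      # stand-in for the workspace mount point
mountpoint -q "$mnt" && umount "$mnt"

[[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1   # clears the ext4 magic (53 ef at 0x438)
[[ -b /dev/nvme0n1   ]] && wipefs --all /dev/nvme0n1     # clears primary/backup GPT and the PMBR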
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:32.384 22:01:57 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:32.384 22:01:57 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:32.384 22:01:57 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:32.384 22:01:57 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:32.384 22:01:57 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:32.384 22:01:57 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:32.384 22:01:57 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:32.384 22:01:57 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:32.384 22:01:57 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:32.384 22:01:57 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:32.384 22:01:57 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:32.384 22:01:57 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:32.384 22:01:57 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:32.384 22:01:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.384 22:01:57 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:32.384 22:01:57 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:32.384 22:01:57 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.384 22:01:57 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:35.682 22:02:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.682 22:02:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.682 22:02:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.683 22:02:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.683 22:02:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.683 22:02:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.683 22:02:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.683 22:02:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.683 22:02:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.683 22:02:00 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.683 22:02:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.683 22:02:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.683 22:02:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.683 22:02:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.683 22:02:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.683 22:02:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.683 22:02:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.683 22:02:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:35.683 22:02:00 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:35.683 22:02:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.683 22:02:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.683 22:02:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.683 22:02:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.683 22:02:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.683 22:02:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.683 22:02:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.683 22:02:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.683 22:02:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.683 22:02:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.683 22:02:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.683 22:02:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.683 22:02:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.683 22:02:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.683 22:02:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.683 22:02:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.683 22:02:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.943 22:02:01 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:35.943 22:02:01 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:35.943 22:02:01 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.943 22:02:01 
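Each verify pass re-runs setup.sh config with PCI_ALLOWED=0000:65:00.0 and scans the output line by line: controllers that do not match the allowed address are skipped, and found flips to 1 only when the allowed device reports an 'Active devices: ...' status naming the expected mount (nvme0n1:nvme0n1 at this point in the run). Stripped to its core, the reader looks like this; ./scripts/setup.sh is a stand-in for the full workspace path:

# Sketch of the verify loop: the allowed PCI device must stay bound because it is in active use.
allowed=0000:65:00.0
expect='nvme0n1:nvme0n1'        # the mounts argument passed to verify
found=0

while read -r pci _ _ status; do
    [[ $pci == "$allowed" ]] || continue                        # ignore every other controller
    [[ $status == *"Active devices: "*"$expect"* ]] && found=1
done < <(PCI_ALLOWED=$allowed ./scripts/setup.sh config)

(( found == 1 )) || echo "verify failed for $expect" >&2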
setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:35.943 22:02:01 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:35.943 22:02:01 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.943 22:02:01 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:35.943 22:02:01 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:35.943 22:02:01 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:35.943 22:02:01 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:35.943 22:02:01 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:35.943 22:02:01 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:35.943 22:02:01 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:35.943 22:02:01 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:35.943 22:02:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.943 22:02:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:35.943 22:02:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:35.943 22:02:01 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.943 22:02:01 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:39.243 22:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.243 22:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.243 22:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.243 22:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.243 22:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.243 22:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.243 22:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.243 22:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.243 22:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.243 22:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.243 22:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.243 22:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.243 22:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.243 22:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.243 22:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.243 22:02:04 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.243 22:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.243 22:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:39.243 22:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:39.243 22:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.243 22:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.243 22:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.243 22:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.243 22:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.243 22:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.243 22:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.243 22:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.243 22:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.243 22:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.243 22:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.243 22:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.243 22:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.243 22:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.243 22:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.243 22:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.243 22:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.505 22:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:39.505 22:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:39.505 22:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:39.505 22:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:39.505 22:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:39.505 22:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:39.505 22:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:39.505 22:02:04 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:39.505 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:39.505 00:04:39.505 real 0m12.712s 00:04:39.505 user 0m3.656s 00:04:39.505 sys 0m6.845s 00:04:39.505 22:02:04 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.505 22:02:04 
setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:39.505 ************************************ 00:04:39.505 END TEST nvme_mount 00:04:39.505 ************************************ 00:04:39.505 22:02:04 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:39.505 22:02:04 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:39.505 22:02:04 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.505 22:02:04 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.505 22:02:04 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:39.505 ************************************ 00:04:39.505 START TEST dm_mount 00:04:39.505 ************************************ 00:04:39.505 22:02:04 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:39.505 22:02:04 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:39.505 22:02:04 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:39.505 22:02:04 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:39.505 22:02:04 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:39.505 22:02:04 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:39.505 22:02:04 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:39.505 22:02:04 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:39.505 22:02:04 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:39.505 22:02:04 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:39.505 22:02:04 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:39.505 22:02:04 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:39.505 22:02:04 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:39.505 22:02:04 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:39.505 22:02:04 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:39.505 22:02:04 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:39.505 22:02:04 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:39.505 22:02:04 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:39.505 22:02:04 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:39.505 22:02:04 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:39.505 22:02:04 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:39.505 22:02:04 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:40.890 Creating new GPT entries in memory. 00:04:40.890 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:40.890 other utilities. 00:04:40.890 22:02:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:40.890 22:02:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:40.890 22:02:05 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:40.890 22:02:05 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:40.890 22:02:05 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:41.830 Creating new GPT entries in memory. 00:04:41.830 The operation has completed successfully. 00:04:41.830 22:02:06 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:41.830 22:02:06 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:41.831 22:02:06 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:41.831 22:02:06 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:41.831 22:02:06 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:42.772 The operation has completed successfully. 00:04:42.772 22:02:07 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:42.772 22:02:07 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:42.772 22:02:07 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2540868 00:04:42.772 22:02:07 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:42.772 22:02:07 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:42.772 22:02:07 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:42.772 22:02:07 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:42.772 22:02:07 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:42.772 22:02:07 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:42.772 22:02:07 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:42.772 22:02:07 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:42.772 22:02:07 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:42.772 22:02:07 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:42.772 22:02:07 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:42.772 22:02:07 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:42.772 22:02:07 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:42.772 22:02:07 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:42.772 22:02:07 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:42.772 22:02:07 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:42.772 22:02:07 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:42.772 22:02:07 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:42.772 22:02:07 setup.sh.devices.dm_mount -- 
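dm_mount repeats the exercise with two 1 GiB partitions and a device-mapper node stacked on top: sgdisk creates nvme0n1p1 and nvme0n1p2, dmsetup create nvme_dm_test yields /dev/dm-0 (reachable as /dev/mapper/nvme_dm_test), and that mapper device is formatted and mounted just like the bare partition was. The table below is an assumption, since the log shows only the dmsetup create call and not the target line, but a linear concatenation of the two partitions is the natural shape:

# Sketch of the dm_mount setup: two partitions joined by a linear device-mapper table.
disk=/dev/nvme0n1
flock "$disk" sgdisk "$disk" --new=1:2048:2099199     # nvme0n1p1
flock "$disk" sgdisk "$disk" --new=2:2099200:4196351  # nvme0n1p2
udevadm settle

p1_sz=$(blockdev --getsz "${disk}p1")                 # sizes in 512-byte sectors
p2_sz=$(blockdev --getsz "${disk}p2")

# Assumed table: p1 followed by p2, exposed as one flat device named nvme_dm_test.
dmsetup create nvme_dm_test <<EOF
0 $p1_sz linear ${disk}p1 0
$p1_sz $p2_sz linear ${disk}p2 0
EOF

echo "nvme_dm_test resolves to $(readlink -f /dev/mapper/nvme_dm_test)"   # /dev/dm-0 in the trace
mkfs.ext4 -qF /dev/mapper/nvme_dm_test
mkdir -p /tmp/dm_mount && mount /dev/mapper/nvme_dm_test /tmp/dm_mount    # stand-in for the workspace dm_mount path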
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:42.772 22:02:07 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:42.772 22:02:07 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:42.772 22:02:07 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:42.772 22:02:07 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:42.772 22:02:07 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:42.772 22:02:07 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:42.772 22:02:07 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:42.772 22:02:07 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:42.772 22:02:07 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:42.772 22:02:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.772 22:02:07 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:42.772 22:02:07 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:42.772 22:02:07 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.772 22:02:07 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:46.073 22:02:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.073 22:02:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.073 22:02:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.073 22:02:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.073 22:02:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.073 22:02:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.073 22:02:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.073 22:02:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.073 22:02:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.073 22:02:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.073 22:02:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.073 22:02:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.073 22:02:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.073 22:02:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.073 22:02:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.073 22:02:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.073 22:02:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.073 22:02:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:46.073 22:02:10 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:46.073 22:02:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.073 22:02:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.073 22:02:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.073 22:02:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.073 22:02:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.073 22:02:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.073 22:02:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.073 22:02:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.073 22:02:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.073 22:02:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.073 22:02:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.073 22:02:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.073 22:02:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.073 22:02:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.073 22:02:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.073 22:02:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.073 22:02:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.073 22:02:11 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:46.073 22:02:11 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:46.073 22:02:11 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:46.073 22:02:11 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:46.073 22:02:11 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:46.073 22:02:11 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:46.073 22:02:11 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:46.073 22:02:11 
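The verify pass for the mapper device keys off the holders relationship rather than a mount string: both partitions list dm-0 under /sys/class/block/<partition>/holders, which is why the status line reads 'holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0'. Checking that relationship directly is a one-liner per partition:

# Sketch: confirm that dm-0 really holds both partitions, as the verify output asserts.
for part in nvme0n1p1 nvme0n1p2; do
    if [[ -e /sys/class/block/$part/holders/dm-0 ]]; then
        echo "holder@$part:dm-0"
    else
        echo "$part is not claimed by dm-0" >&2
    fi
done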
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:46.073 22:02:11 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:46.073 22:02:11 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:46.073 22:02:11 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:46.073 22:02:11 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:46.073 22:02:11 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:46.073 22:02:11 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:46.073 22:02:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.073 22:02:11 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:46.073 22:02:11 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:46.073 22:02:11 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.073 22:02:11 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:49.371 22:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.371 22:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.371 22:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.371 22:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.371 22:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.371 22:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.371 22:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.371 22:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.371 22:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.371 22:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.371 22:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.371 22:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.371 22:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.371 22:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.371 22:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.371 22:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.371 22:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.371 22:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:49.371 22:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:49.371 22:02:14 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.371 22:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.371 22:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.371 22:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.371 22:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.371 22:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.371 22:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.371 22:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.371 22:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.371 22:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.371 22:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.371 22:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.371 22:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.371 22:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.371 22:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.371 22:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.371 22:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.633 22:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:49.633 22:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:49.633 22:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:49.633 22:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:49.633 22:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:49.633 22:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:49.633 22:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:49.633 22:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:49.633 22:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:49.633 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:49.633 22:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:49.633 22:02:14 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:49.633 00:04:49.633 real 0m10.159s 00:04:49.633 user 0m2.555s 00:04:49.633 sys 0m4.593s 00:04:49.633 22:02:14 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.633 22:02:14 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:49.633 ************************************ 00:04:49.633 END TEST dm_mount 00:04:49.633 ************************************ 00:04:49.633 22:02:14 setup.sh.devices -- common/autotest_common.sh@1142 -- # 
return 0 00:04:49.633 22:02:14 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:49.633 22:02:14 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:49.633 22:02:14 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.894 22:02:14 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:49.894 22:02:14 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:49.894 22:02:14 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:49.894 22:02:14 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:50.155 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:50.155 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:50.155 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:50.155 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:50.155 22:02:15 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:50.155 22:02:15 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:50.155 22:02:15 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:50.155 22:02:15 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:50.155 22:02:15 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:50.155 22:02:15 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:50.155 22:02:15 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:50.155 00:04:50.155 real 0m27.346s 00:04:50.155 user 0m7.820s 00:04:50.155 sys 0m14.146s 00:04:50.155 22:02:15 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.155 22:02:15 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:50.155 ************************************ 00:04:50.155 END TEST devices 00:04:50.155 ************************************ 00:04:50.155 22:02:15 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:50.155 00:04:50.155 real 1m33.432s 00:04:50.155 user 0m30.949s 00:04:50.155 sys 0m53.340s 00:04:50.155 22:02:15 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.155 22:02:15 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:50.155 ************************************ 00:04:50.155 END TEST setup.sh 00:04:50.155 ************************************ 00:04:50.155 22:02:15 -- common/autotest_common.sh@1142 -- # return 0 00:04:50.155 22:02:15 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:53.454 Hugepages 00:04:53.454 node hugesize free / total 00:04:53.454 node0 1048576kB 0 / 0 00:04:53.454 node0 2048kB 2048 / 2048 00:04:53.454 node1 1048576kB 0 / 0 00:04:53.454 node1 2048kB 0 / 0 00:04:53.454 00:04:53.454 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:53.454 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:53.454 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:53.454 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:53.454 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:53.454 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:53.454 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:53.454 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:53.454 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:53.454 NVMe 
0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:53.454 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:53.454 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:53.454 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:53.454 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:53.454 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:53.454 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:53.454 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:53.454 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:04:53.454 22:02:18 -- spdk/autotest.sh@130 -- # uname -s 00:04:53.454 22:02:18 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:53.454 22:02:18 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:53.454 22:02:18 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:56.757 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:56.757 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:56.757 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:56.757 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:57.018 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:57.018 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:57.018 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:57.018 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:57.018 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:57.018 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:57.018 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:57.018 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:57.018 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:57.018 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:57.018 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:57.018 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:58.931 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:59.193 22:02:24 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:00.132 22:02:25 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:00.132 22:02:25 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:00.132 22:02:25 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:00.132 22:02:25 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:00.132 22:02:25 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:00.133 22:02:25 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:00.133 22:02:25 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:00.133 22:02:25 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:00.133 22:02:25 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:00.133 22:02:25 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:00.133 22:02:25 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:05:00.133 22:02:25 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:02.688 Waiting for block devices as requested 00:05:02.948 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:02.948 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:02.948 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:02.948 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:03.207 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:03.207 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:03.207 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:03.467 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:03.467 0000:65:00.0 (144d a80a): 
vfio-pci -> nvme 00:05:03.727 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:03.727 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:03.727 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:03.727 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:04.022 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:04.022 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:04.022 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:04.022 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:04.281 22:02:29 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:04.541 22:02:29 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:05:04.541 22:02:29 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:04.541 22:02:29 -- common/autotest_common.sh@1502 -- # grep 0000:65:00.0/nvme/nvme 00:05:04.541 22:02:29 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:04.541 22:02:29 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:05:04.541 22:02:29 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:04.541 22:02:29 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:04.541 22:02:29 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:04.541 22:02:29 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:04.541 22:02:29 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:04.541 22:02:29 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:04.541 22:02:29 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:04.541 22:02:29 -- common/autotest_common.sh@1545 -- # oacs=' 0x5f' 00:05:04.541 22:02:29 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:04.541 22:02:29 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:04.541 22:02:29 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:04.541 22:02:29 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:04.541 22:02:29 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:04.541 22:02:29 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:04.541 22:02:29 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:04.541 22:02:29 -- common/autotest_common.sh@1557 -- # continue 00:05:04.541 22:02:29 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:04.541 22:02:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:04.541 22:02:29 -- common/autotest_common.sh@10 -- # set +x 00:05:04.541 22:02:29 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:04.541 22:02:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:04.541 22:02:29 -- common/autotest_common.sh@10 -- # set +x 00:05:04.541 22:02:29 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:07.896 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:07.896 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:07.896 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:07.896 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:07.896 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:07.896 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:07.896 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:07.896 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:07.896 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:07.896 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 
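(Editorial sketch.) The `get_nvme_ctrlr_from_bdf` / `oacs` / `unvmcap` trace just above resolves the NVMe character device behind PCI address 0000:65:00.0 through sysfs and then checks the controller's Optional Admin Command Support field before deciding whether a namespace revert is possible. The following is a minimal standalone sketch of that flow, not the exact autotest_common.sh implementation; it assumes nvme-cli is installed and reuses the device names seen in this run (0000:65:00.0, /dev/nvme0), which may differ on other hosts.

```bash
#!/usr/bin/env bash
# Sketch of the BDF -> controller resolution traced above; the PCI address
# and sysfs layout are the ones from this log.
bdf=0000:65:00.0

# Each /sys/class/nvme/nvmeX symlink resolves to a path containing the owning
# PCI address, e.g. .../0000:64:02.0/0000:65:00.0/nvme/nvme0.
ctrlr=""
for link in /sys/class/nvme/nvme*; do
    if readlink -f "$link" | grep -q "$bdf/nvme/nvme"; then
        ctrlr="/dev/$(basename "$link")"
        break
    fi
done
[[ -n "$ctrlr" ]] || { echo "no controller found for $bdf" >&2; exit 1; }

# OACS bit 3 (mask 0x8) advertises namespace management, the capability the
# autotest checks (the run above saw oacs=0x5f, so the bit is set).
oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
if (( oacs & 0x8 )); then
    unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
    echo "$ctrlr supports ns-manage, unallocated capacity:$unvmcap"
else
    echo "$ctrlr does not support namespace management"
fi
```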
00:05:07.896 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:07.896 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:07.896 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:07.896 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:07.896 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:07.896 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:07.896 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:07.896 22:02:33 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:07.896 22:02:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:07.896 22:02:33 -- common/autotest_common.sh@10 -- # set +x 00:05:07.896 22:02:33 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:07.896 22:02:33 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:07.896 22:02:33 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:07.896 22:02:33 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:07.896 22:02:33 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:07.896 22:02:33 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:07.896 22:02:33 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:07.896 22:02:33 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:07.896 22:02:33 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:07.896 22:02:33 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:07.896 22:02:33 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:07.896 22:02:33 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:07.896 22:02:33 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:05:07.896 22:02:33 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:07.896 22:02:33 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:07.896 22:02:33 -- common/autotest_common.sh@1580 -- # device=0xa80a 00:05:07.896 22:02:33 -- common/autotest_common.sh@1581 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:07.896 22:02:33 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:05:07.896 22:02:33 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:05:07.896 22:02:33 -- common/autotest_common.sh@1593 -- # return 0 00:05:07.896 22:02:33 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:08.157 22:02:33 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:08.157 22:02:33 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:08.157 22:02:33 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:08.157 22:02:33 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:08.157 22:02:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:08.157 22:02:33 -- common/autotest_common.sh@10 -- # set +x 00:05:08.157 22:02:33 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:08.157 22:02:33 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:08.157 22:02:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:08.157 22:02:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.157 22:02:33 -- common/autotest_common.sh@10 -- # set +x 00:05:08.157 ************************************ 00:05:08.157 START TEST env 00:05:08.157 ************************************ 00:05:08.157 22:02:33 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:08.157 * Looking for test storage... 
00:05:08.157 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:08.157 22:02:33 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:08.157 22:02:33 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:08.157 22:02:33 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.157 22:02:33 env -- common/autotest_common.sh@10 -- # set +x 00:05:08.157 ************************************ 00:05:08.157 START TEST env_memory 00:05:08.157 ************************************ 00:05:08.157 22:02:33 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:08.157 00:05:08.157 00:05:08.157 CUnit - A unit testing framework for C - Version 2.1-3 00:05:08.157 http://cunit.sourceforge.net/ 00:05:08.157 00:05:08.157 00:05:08.157 Suite: memory 00:05:08.157 Test: alloc and free memory map ...[2024-07-15 22:02:33.455726] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:08.157 passed 00:05:08.157 Test: mem map translation ...[2024-07-15 22:02:33.481266] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:08.157 [2024-07-15 22:02:33.481299] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:08.157 [2024-07-15 22:02:33.481347] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:08.157 [2024-07-15 22:02:33.481356] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:08.418 passed 00:05:08.418 Test: mem map registration ...[2024-07-15 22:02:33.536486] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:08.418 [2024-07-15 22:02:33.536521] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:08.418 passed 00:05:08.418 Test: mem map adjacent registrations ...passed 00:05:08.418 00:05:08.418 Run Summary: Type Total Ran Passed Failed Inactive 00:05:08.418 suites 1 1 n/a 0 0 00:05:08.418 tests 4 4 4 0 0 00:05:08.418 asserts 152 152 152 0 n/a 00:05:08.418 00:05:08.418 Elapsed time = 0.194 seconds 00:05:08.418 00:05:08.418 real 0m0.208s 00:05:08.418 user 0m0.195s 00:05:08.418 sys 0m0.013s 00:05:08.418 22:02:33 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.418 22:02:33 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:08.418 ************************************ 00:05:08.418 END TEST env_memory 00:05:08.418 ************************************ 00:05:08.418 22:02:33 env -- common/autotest_common.sh@1142 -- # return 0 00:05:08.418 22:02:33 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:08.418 22:02:33 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
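(Editorial sketch.) Both the pre-cleanup and opal_revert_cleanup traces above build their device list with the `get_nvme_bdfs` helper, i.e. `gen_nvme.sh` piped through `jq` to extract each controller's `traddr`. A minimal sketch of that step, assuming the SPDK checkout path used in this run:

```bash
#!/usr/bin/env bash
# Sketch of the get_nvme_bdfs helper traced above: gen_nvme.sh emits an SPDK
# bdev config in JSON and jq pulls out each controller's PCI address (traddr).
# The repo path is the one from this run; adjust for your checkout.
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')

(( ${#bdfs[@]} > 0 )) || { echo "no NVMe devices found" >&2; exit 1; }
printf '%s\n' "${bdfs[@]}"   # this run printed a single entry: 0000:65:00.0
```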
00:05:08.418 22:02:33 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.418 22:02:33 env -- common/autotest_common.sh@10 -- # set +x 00:05:08.418 ************************************ 00:05:08.418 START TEST env_vtophys 00:05:08.418 ************************************ 00:05:08.418 22:02:33 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:08.418 EAL: lib.eal log level changed from notice to debug 00:05:08.418 EAL: Detected lcore 0 as core 0 on socket 0 00:05:08.418 EAL: Detected lcore 1 as core 1 on socket 0 00:05:08.418 EAL: Detected lcore 2 as core 2 on socket 0 00:05:08.418 EAL: Detected lcore 3 as core 3 on socket 0 00:05:08.418 EAL: Detected lcore 4 as core 4 on socket 0 00:05:08.418 EAL: Detected lcore 5 as core 5 on socket 0 00:05:08.418 EAL: Detected lcore 6 as core 6 on socket 0 00:05:08.418 EAL: Detected lcore 7 as core 7 on socket 0 00:05:08.418 EAL: Detected lcore 8 as core 8 on socket 0 00:05:08.418 EAL: Detected lcore 9 as core 9 on socket 0 00:05:08.418 EAL: Detected lcore 10 as core 10 on socket 0 00:05:08.418 EAL: Detected lcore 11 as core 11 on socket 0 00:05:08.418 EAL: Detected lcore 12 as core 12 on socket 0 00:05:08.418 EAL: Detected lcore 13 as core 13 on socket 0 00:05:08.418 EAL: Detected lcore 14 as core 14 on socket 0 00:05:08.418 EAL: Detected lcore 15 as core 15 on socket 0 00:05:08.418 EAL: Detected lcore 16 as core 16 on socket 0 00:05:08.418 EAL: Detected lcore 17 as core 17 on socket 0 00:05:08.418 EAL: Detected lcore 18 as core 18 on socket 0 00:05:08.418 EAL: Detected lcore 19 as core 19 on socket 0 00:05:08.418 EAL: Detected lcore 20 as core 20 on socket 0 00:05:08.418 EAL: Detected lcore 21 as core 21 on socket 0 00:05:08.418 EAL: Detected lcore 22 as core 22 on socket 0 00:05:08.418 EAL: Detected lcore 23 as core 23 on socket 0 00:05:08.418 EAL: Detected lcore 24 as core 24 on socket 0 00:05:08.418 EAL: Detected lcore 25 as core 25 on socket 0 00:05:08.418 EAL: Detected lcore 26 as core 26 on socket 0 00:05:08.418 EAL: Detected lcore 27 as core 27 on socket 0 00:05:08.418 EAL: Detected lcore 28 as core 28 on socket 0 00:05:08.418 EAL: Detected lcore 29 as core 29 on socket 0 00:05:08.418 EAL: Detected lcore 30 as core 30 on socket 0 00:05:08.418 EAL: Detected lcore 31 as core 31 on socket 0 00:05:08.418 EAL: Detected lcore 32 as core 32 on socket 0 00:05:08.418 EAL: Detected lcore 33 as core 33 on socket 0 00:05:08.418 EAL: Detected lcore 34 as core 34 on socket 0 00:05:08.418 EAL: Detected lcore 35 as core 35 on socket 0 00:05:08.418 EAL: Detected lcore 36 as core 0 on socket 1 00:05:08.418 EAL: Detected lcore 37 as core 1 on socket 1 00:05:08.418 EAL: Detected lcore 38 as core 2 on socket 1 00:05:08.418 EAL: Detected lcore 39 as core 3 on socket 1 00:05:08.418 EAL: Detected lcore 40 as core 4 on socket 1 00:05:08.418 EAL: Detected lcore 41 as core 5 on socket 1 00:05:08.418 EAL: Detected lcore 42 as core 6 on socket 1 00:05:08.418 EAL: Detected lcore 43 as core 7 on socket 1 00:05:08.418 EAL: Detected lcore 44 as core 8 on socket 1 00:05:08.418 EAL: Detected lcore 45 as core 9 on socket 1 00:05:08.418 EAL: Detected lcore 46 as core 10 on socket 1 00:05:08.418 EAL: Detected lcore 47 as core 11 on socket 1 00:05:08.418 EAL: Detected lcore 48 as core 12 on socket 1 00:05:08.418 EAL: Detected lcore 49 as core 13 on socket 1 00:05:08.418 EAL: Detected lcore 50 as core 14 on socket 1 00:05:08.418 EAL: Detected lcore 51 as core 15 on socket 1 00:05:08.418 
EAL: Detected lcore 52 as core 16 on socket 1 00:05:08.418 EAL: Detected lcore 53 as core 17 on socket 1 00:05:08.418 EAL: Detected lcore 54 as core 18 on socket 1 00:05:08.418 EAL: Detected lcore 55 as core 19 on socket 1 00:05:08.418 EAL: Detected lcore 56 as core 20 on socket 1 00:05:08.418 EAL: Detected lcore 57 as core 21 on socket 1 00:05:08.418 EAL: Detected lcore 58 as core 22 on socket 1 00:05:08.418 EAL: Detected lcore 59 as core 23 on socket 1 00:05:08.418 EAL: Detected lcore 60 as core 24 on socket 1 00:05:08.418 EAL: Detected lcore 61 as core 25 on socket 1 00:05:08.418 EAL: Detected lcore 62 as core 26 on socket 1 00:05:08.419 EAL: Detected lcore 63 as core 27 on socket 1 00:05:08.419 EAL: Detected lcore 64 as core 28 on socket 1 00:05:08.419 EAL: Detected lcore 65 as core 29 on socket 1 00:05:08.419 EAL: Detected lcore 66 as core 30 on socket 1 00:05:08.419 EAL: Detected lcore 67 as core 31 on socket 1 00:05:08.419 EAL: Detected lcore 68 as core 32 on socket 1 00:05:08.419 EAL: Detected lcore 69 as core 33 on socket 1 00:05:08.419 EAL: Detected lcore 70 as core 34 on socket 1 00:05:08.419 EAL: Detected lcore 71 as core 35 on socket 1 00:05:08.419 EAL: Detected lcore 72 as core 0 on socket 0 00:05:08.419 EAL: Detected lcore 73 as core 1 on socket 0 00:05:08.419 EAL: Detected lcore 74 as core 2 on socket 0 00:05:08.419 EAL: Detected lcore 75 as core 3 on socket 0 00:05:08.419 EAL: Detected lcore 76 as core 4 on socket 0 00:05:08.419 EAL: Detected lcore 77 as core 5 on socket 0 00:05:08.419 EAL: Detected lcore 78 as core 6 on socket 0 00:05:08.419 EAL: Detected lcore 79 as core 7 on socket 0 00:05:08.419 EAL: Detected lcore 80 as core 8 on socket 0 00:05:08.419 EAL: Detected lcore 81 as core 9 on socket 0 00:05:08.419 EAL: Detected lcore 82 as core 10 on socket 0 00:05:08.419 EAL: Detected lcore 83 as core 11 on socket 0 00:05:08.419 EAL: Detected lcore 84 as core 12 on socket 0 00:05:08.419 EAL: Detected lcore 85 as core 13 on socket 0 00:05:08.419 EAL: Detected lcore 86 as core 14 on socket 0 00:05:08.419 EAL: Detected lcore 87 as core 15 on socket 0 00:05:08.419 EAL: Detected lcore 88 as core 16 on socket 0 00:05:08.419 EAL: Detected lcore 89 as core 17 on socket 0 00:05:08.419 EAL: Detected lcore 90 as core 18 on socket 0 00:05:08.419 EAL: Detected lcore 91 as core 19 on socket 0 00:05:08.419 EAL: Detected lcore 92 as core 20 on socket 0 00:05:08.419 EAL: Detected lcore 93 as core 21 on socket 0 00:05:08.419 EAL: Detected lcore 94 as core 22 on socket 0 00:05:08.419 EAL: Detected lcore 95 as core 23 on socket 0 00:05:08.419 EAL: Detected lcore 96 as core 24 on socket 0 00:05:08.419 EAL: Detected lcore 97 as core 25 on socket 0 00:05:08.419 EAL: Detected lcore 98 as core 26 on socket 0 00:05:08.419 EAL: Detected lcore 99 as core 27 on socket 0 00:05:08.419 EAL: Detected lcore 100 as core 28 on socket 0 00:05:08.419 EAL: Detected lcore 101 as core 29 on socket 0 00:05:08.419 EAL: Detected lcore 102 as core 30 on socket 0 00:05:08.419 EAL: Detected lcore 103 as core 31 on socket 0 00:05:08.419 EAL: Detected lcore 104 as core 32 on socket 0 00:05:08.419 EAL: Detected lcore 105 as core 33 on socket 0 00:05:08.419 EAL: Detected lcore 106 as core 34 on socket 0 00:05:08.419 EAL: Detected lcore 107 as core 35 on socket 0 00:05:08.419 EAL: Detected lcore 108 as core 0 on socket 1 00:05:08.419 EAL: Detected lcore 109 as core 1 on socket 1 00:05:08.419 EAL: Detected lcore 110 as core 2 on socket 1 00:05:08.419 EAL: Detected lcore 111 as core 3 on socket 1 00:05:08.419 EAL: Detected 
lcore 112 as core 4 on socket 1 00:05:08.419 EAL: Detected lcore 113 as core 5 on socket 1 00:05:08.419 EAL: Detected lcore 114 as core 6 on socket 1 00:05:08.419 EAL: Detected lcore 115 as core 7 on socket 1 00:05:08.419 EAL: Detected lcore 116 as core 8 on socket 1 00:05:08.419 EAL: Detected lcore 117 as core 9 on socket 1 00:05:08.419 EAL: Detected lcore 118 as core 10 on socket 1 00:05:08.419 EAL: Detected lcore 119 as core 11 on socket 1 00:05:08.419 EAL: Detected lcore 120 as core 12 on socket 1 00:05:08.419 EAL: Detected lcore 121 as core 13 on socket 1 00:05:08.419 EAL: Detected lcore 122 as core 14 on socket 1 00:05:08.419 EAL: Detected lcore 123 as core 15 on socket 1 00:05:08.419 EAL: Detected lcore 124 as core 16 on socket 1 00:05:08.419 EAL: Detected lcore 125 as core 17 on socket 1 00:05:08.419 EAL: Detected lcore 126 as core 18 on socket 1 00:05:08.419 EAL: Detected lcore 127 as core 19 on socket 1 00:05:08.419 EAL: Skipped lcore 128 as core 20 on socket 1 00:05:08.419 EAL: Skipped lcore 129 as core 21 on socket 1 00:05:08.419 EAL: Skipped lcore 130 as core 22 on socket 1 00:05:08.419 EAL: Skipped lcore 131 as core 23 on socket 1 00:05:08.419 EAL: Skipped lcore 132 as core 24 on socket 1 00:05:08.419 EAL: Skipped lcore 133 as core 25 on socket 1 00:05:08.419 EAL: Skipped lcore 134 as core 26 on socket 1 00:05:08.419 EAL: Skipped lcore 135 as core 27 on socket 1 00:05:08.419 EAL: Skipped lcore 136 as core 28 on socket 1 00:05:08.419 EAL: Skipped lcore 137 as core 29 on socket 1 00:05:08.419 EAL: Skipped lcore 138 as core 30 on socket 1 00:05:08.419 EAL: Skipped lcore 139 as core 31 on socket 1 00:05:08.419 EAL: Skipped lcore 140 as core 32 on socket 1 00:05:08.419 EAL: Skipped lcore 141 as core 33 on socket 1 00:05:08.419 EAL: Skipped lcore 142 as core 34 on socket 1 00:05:08.419 EAL: Skipped lcore 143 as core 35 on socket 1 00:05:08.419 EAL: Maximum logical cores by configuration: 128 00:05:08.419 EAL: Detected CPU lcores: 128 00:05:08.419 EAL: Detected NUMA nodes: 2 00:05:08.419 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:08.419 EAL: Detected shared linkage of DPDK 00:05:08.419 EAL: No shared files mode enabled, IPC will be disabled 00:05:08.419 EAL: Bus pci wants IOVA as 'DC' 00:05:08.419 EAL: Buses did not request a specific IOVA mode. 00:05:08.419 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:08.419 EAL: Selected IOVA mode 'VA' 00:05:08.419 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.419 EAL: Probing VFIO support... 00:05:08.419 EAL: IOMMU type 1 (Type 1) is supported 00:05:08.419 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:08.419 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:08.419 EAL: VFIO support initialized 00:05:08.419 EAL: Ask a virtual area of 0x2e000 bytes 00:05:08.419 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:08.419 EAL: Setting up physically contiguous memory... 
00:05:08.419 EAL: Setting maximum number of open files to 524288 00:05:08.419 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:08.419 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:08.419 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:08.419 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.419 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:08.419 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:08.419 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.419 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:08.419 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:08.419 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.419 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:08.419 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:08.419 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.419 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:08.419 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:08.419 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.419 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:08.419 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:08.419 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.419 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:08.419 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:08.419 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.419 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:08.419 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:08.419 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.419 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:08.419 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:08.419 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:08.419 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.419 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:08.419 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:08.419 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.419 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:08.419 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:08.419 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.419 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:08.419 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:08.419 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.419 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:08.419 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:08.419 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.419 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:08.419 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:08.419 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.419 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:08.419 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:08.419 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.419 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:08.419 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:08.419 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.419 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:08.419 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:08.419 EAL: Hugepages will be freed exactly as allocated. 00:05:08.419 EAL: No shared files mode enabled, IPC is disabled 00:05:08.419 EAL: No shared files mode enabled, IPC is disabled 00:05:08.419 EAL: TSC frequency is ~2400000 KHz 00:05:08.419 EAL: Main lcore 0 is ready (tid=7f26a9d8ca00;cpuset=[0]) 00:05:08.419 EAL: Trying to obtain current memory policy. 00:05:08.419 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.419 EAL: Restoring previous memory policy: 0 00:05:08.419 EAL: request: mp_malloc_sync 00:05:08.419 EAL: No shared files mode enabled, IPC is disabled 00:05:08.419 EAL: Heap on socket 0 was expanded by 2MB 00:05:08.419 EAL: No shared files mode enabled, IPC is disabled 00:05:08.680 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:08.680 EAL: Mem event callback 'spdk:(nil)' registered 00:05:08.680 00:05:08.680 00:05:08.680 CUnit - A unit testing framework for C - Version 2.1-3 00:05:08.680 http://cunit.sourceforge.net/ 00:05:08.680 00:05:08.680 00:05:08.680 Suite: components_suite 00:05:08.680 Test: vtophys_malloc_test ...passed 00:05:08.680 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:08.680 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.680 EAL: Restoring previous memory policy: 4 00:05:08.680 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.680 EAL: request: mp_malloc_sync 00:05:08.680 EAL: No shared files mode enabled, IPC is disabled 00:05:08.680 EAL: Heap on socket 0 was expanded by 4MB 00:05:08.680 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.680 EAL: request: mp_malloc_sync 00:05:08.680 EAL: No shared files mode enabled, IPC is disabled 00:05:08.680 EAL: Heap on socket 0 was shrunk by 4MB 00:05:08.680 EAL: Trying to obtain current memory policy. 00:05:08.680 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.680 EAL: Restoring previous memory policy: 4 00:05:08.680 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.680 EAL: request: mp_malloc_sync 00:05:08.680 EAL: No shared files mode enabled, IPC is disabled 00:05:08.680 EAL: Heap on socket 0 was expanded by 6MB 00:05:08.680 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.680 EAL: request: mp_malloc_sync 00:05:08.680 EAL: No shared files mode enabled, IPC is disabled 00:05:08.680 EAL: Heap on socket 0 was shrunk by 6MB 00:05:08.680 EAL: Trying to obtain current memory policy. 00:05:08.680 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.680 EAL: Restoring previous memory policy: 4 00:05:08.680 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.680 EAL: request: mp_malloc_sync 00:05:08.680 EAL: No shared files mode enabled, IPC is disabled 00:05:08.680 EAL: Heap on socket 0 was expanded by 10MB 00:05:08.680 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.680 EAL: request: mp_malloc_sync 00:05:08.680 EAL: No shared files mode enabled, IPC is disabled 00:05:08.680 EAL: Heap on socket 0 was shrunk by 10MB 00:05:08.680 EAL: Trying to obtain current memory policy. 
00:05:08.680 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.680 EAL: Restoring previous memory policy: 4 00:05:08.680 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.680 EAL: request: mp_malloc_sync 00:05:08.680 EAL: No shared files mode enabled, IPC is disabled 00:05:08.680 EAL: Heap on socket 0 was expanded by 18MB 00:05:08.680 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.680 EAL: request: mp_malloc_sync 00:05:08.680 EAL: No shared files mode enabled, IPC is disabled 00:05:08.680 EAL: Heap on socket 0 was shrunk by 18MB 00:05:08.680 EAL: Trying to obtain current memory policy. 00:05:08.680 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.680 EAL: Restoring previous memory policy: 4 00:05:08.680 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.680 EAL: request: mp_malloc_sync 00:05:08.680 EAL: No shared files mode enabled, IPC is disabled 00:05:08.680 EAL: Heap on socket 0 was expanded by 34MB 00:05:08.680 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.680 EAL: request: mp_malloc_sync 00:05:08.680 EAL: No shared files mode enabled, IPC is disabled 00:05:08.680 EAL: Heap on socket 0 was shrunk by 34MB 00:05:08.680 EAL: Trying to obtain current memory policy. 00:05:08.680 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.680 EAL: Restoring previous memory policy: 4 00:05:08.680 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.680 EAL: request: mp_malloc_sync 00:05:08.680 EAL: No shared files mode enabled, IPC is disabled 00:05:08.680 EAL: Heap on socket 0 was expanded by 66MB 00:05:08.680 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.680 EAL: request: mp_malloc_sync 00:05:08.680 EAL: No shared files mode enabled, IPC is disabled 00:05:08.680 EAL: Heap on socket 0 was shrunk by 66MB 00:05:08.680 EAL: Trying to obtain current memory policy. 00:05:08.680 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.680 EAL: Restoring previous memory policy: 4 00:05:08.680 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.680 EAL: request: mp_malloc_sync 00:05:08.680 EAL: No shared files mode enabled, IPC is disabled 00:05:08.680 EAL: Heap on socket 0 was expanded by 130MB 00:05:08.680 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.680 EAL: request: mp_malloc_sync 00:05:08.680 EAL: No shared files mode enabled, IPC is disabled 00:05:08.680 EAL: Heap on socket 0 was shrunk by 130MB 00:05:08.680 EAL: Trying to obtain current memory policy. 00:05:08.680 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.680 EAL: Restoring previous memory policy: 4 00:05:08.680 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.680 EAL: request: mp_malloc_sync 00:05:08.680 EAL: No shared files mode enabled, IPC is disabled 00:05:08.680 EAL: Heap on socket 0 was expanded by 258MB 00:05:08.680 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.680 EAL: request: mp_malloc_sync 00:05:08.680 EAL: No shared files mode enabled, IPC is disabled 00:05:08.680 EAL: Heap on socket 0 was shrunk by 258MB 00:05:08.680 EAL: Trying to obtain current memory policy. 
00:05:08.680 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.680 EAL: Restoring previous memory policy: 4 00:05:08.680 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.680 EAL: request: mp_malloc_sync 00:05:08.680 EAL: No shared files mode enabled, IPC is disabled 00:05:08.680 EAL: Heap on socket 0 was expanded by 514MB 00:05:08.940 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.940 EAL: request: mp_malloc_sync 00:05:08.940 EAL: No shared files mode enabled, IPC is disabled 00:05:08.940 EAL: Heap on socket 0 was shrunk by 514MB 00:05:08.940 EAL: Trying to obtain current memory policy. 00:05:08.940 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.940 EAL: Restoring previous memory policy: 4 00:05:08.940 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.940 EAL: request: mp_malloc_sync 00:05:08.940 EAL: No shared files mode enabled, IPC is disabled 00:05:08.940 EAL: Heap on socket 0 was expanded by 1026MB 00:05:09.201 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.201 EAL: request: mp_malloc_sync 00:05:09.201 EAL: No shared files mode enabled, IPC is disabled 00:05:09.201 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:09.201 passed 00:05:09.201 00:05:09.201 Run Summary: Type Total Ran Passed Failed Inactive 00:05:09.202 suites 1 1 n/a 0 0 00:05:09.202 tests 2 2 2 0 0 00:05:09.202 asserts 497 497 497 0 n/a 00:05:09.202 00:05:09.202 Elapsed time = 0.642 seconds 00:05:09.202 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.202 EAL: request: mp_malloc_sync 00:05:09.202 EAL: No shared files mode enabled, IPC is disabled 00:05:09.202 EAL: Heap on socket 0 was shrunk by 2MB 00:05:09.202 EAL: No shared files mode enabled, IPC is disabled 00:05:09.202 EAL: No shared files mode enabled, IPC is disabled 00:05:09.202 EAL: No shared files mode enabled, IPC is disabled 00:05:09.202 00:05:09.202 real 0m0.756s 00:05:09.202 user 0m0.414s 00:05:09.202 sys 0m0.318s 00:05:09.202 22:02:34 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.202 22:02:34 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:09.202 ************************************ 00:05:09.202 END TEST env_vtophys 00:05:09.202 ************************************ 00:05:09.202 22:02:34 env -- common/autotest_common.sh@1142 -- # return 0 00:05:09.202 22:02:34 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:09.202 22:02:34 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.202 22:02:34 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.202 22:02:34 env -- common/autotest_common.sh@10 -- # set +x 00:05:09.202 ************************************ 00:05:09.202 START TEST env_pci 00:05:09.202 ************************************ 00:05:09.202 22:02:34 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:09.462 00:05:09.462 00:05:09.463 CUnit - A unit testing framework for C - Version 2.1-3 00:05:09.463 http://cunit.sourceforge.net/ 00:05:09.463 00:05:09.463 00:05:09.463 Suite: pci 00:05:09.463 Test: pci_hook ...[2024-07-15 22:02:34.528898] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2551717 has claimed it 00:05:09.463 EAL: Cannot find device (10000:00:01.0) 00:05:09.463 EAL: Failed to attach device on primary process 00:05:09.463 passed 00:05:09.463 
00:05:09.463 Run Summary: Type Total Ran Passed Failed Inactive 00:05:09.463 suites 1 1 n/a 0 0 00:05:09.463 tests 1 1 1 0 0 00:05:09.463 asserts 25 25 25 0 n/a 00:05:09.463 00:05:09.463 Elapsed time = 0.038 seconds 00:05:09.463 00:05:09.463 real 0m0.058s 00:05:09.463 user 0m0.017s 00:05:09.463 sys 0m0.041s 00:05:09.463 22:02:34 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.463 22:02:34 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:09.463 ************************************ 00:05:09.463 END TEST env_pci 00:05:09.463 ************************************ 00:05:09.463 22:02:34 env -- common/autotest_common.sh@1142 -- # return 0 00:05:09.463 22:02:34 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:09.463 22:02:34 env -- env/env.sh@15 -- # uname 00:05:09.463 22:02:34 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:09.463 22:02:34 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:09.463 22:02:34 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:09.463 22:02:34 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:09.463 22:02:34 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.463 22:02:34 env -- common/autotest_common.sh@10 -- # set +x 00:05:09.463 ************************************ 00:05:09.463 START TEST env_dpdk_post_init 00:05:09.463 ************************************ 00:05:09.463 22:02:34 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:09.463 EAL: Detected CPU lcores: 128 00:05:09.463 EAL: Detected NUMA nodes: 2 00:05:09.463 EAL: Detected shared linkage of DPDK 00:05:09.463 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:09.463 EAL: Selected IOVA mode 'VA' 00:05:09.463 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.463 EAL: VFIO support initialized 00:05:09.463 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:09.463 EAL: Using IOMMU type 1 (Type 1) 00:05:09.722 EAL: Ignore mapping IO port bar(1) 00:05:09.722 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:05:09.982 EAL: Ignore mapping IO port bar(1) 00:05:09.982 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:05:10.241 EAL: Ignore mapping IO port bar(1) 00:05:10.241 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:05:10.241 EAL: Ignore mapping IO port bar(1) 00:05:10.503 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:05:10.503 EAL: Ignore mapping IO port bar(1) 00:05:10.763 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:05:10.763 EAL: Ignore mapping IO port bar(1) 00:05:11.023 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:05:11.023 EAL: Ignore mapping IO port bar(1) 00:05:11.023 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:05:11.283 EAL: Ignore mapping IO port bar(1) 00:05:11.283 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:05:11.543 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:05:11.803 EAL: Ignore mapping IO port bar(1) 00:05:11.803 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 
00:05:11.803 EAL: Ignore mapping IO port bar(1) 00:05:12.063 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:05:12.063 EAL: Ignore mapping IO port bar(1) 00:05:12.324 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:05:12.324 EAL: Ignore mapping IO port bar(1) 00:05:12.584 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:12.585 EAL: Ignore mapping IO port bar(1) 00:05:12.585 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:12.845 EAL: Ignore mapping IO port bar(1) 00:05:12.845 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:13.104 EAL: Ignore mapping IO port bar(1) 00:05:13.104 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:13.363 EAL: Ignore mapping IO port bar(1) 00:05:13.363 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:13.363 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:13.363 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:13.623 Starting DPDK initialization... 00:05:13.623 Starting SPDK post initialization... 00:05:13.623 SPDK NVMe probe 00:05:13.623 Attaching to 0000:65:00.0 00:05:13.623 Attached to 0000:65:00.0 00:05:13.623 Cleaning up... 00:05:15.530 00:05:15.530 real 0m5.709s 00:05:15.530 user 0m0.175s 00:05:15.530 sys 0m0.083s 00:05:15.530 22:02:40 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.530 22:02:40 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:15.530 ************************************ 00:05:15.530 END TEST env_dpdk_post_init 00:05:15.530 ************************************ 00:05:15.530 22:02:40 env -- common/autotest_common.sh@1142 -- # return 0 00:05:15.530 22:02:40 env -- env/env.sh@26 -- # uname 00:05:15.530 22:02:40 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:15.530 22:02:40 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:15.530 22:02:40 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.530 22:02:40 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.530 22:02:40 env -- common/autotest_common.sh@10 -- # set +x 00:05:15.530 ************************************ 00:05:15.530 START TEST env_mem_callbacks 00:05:15.530 ************************************ 00:05:15.530 22:02:40 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:15.530 EAL: Detected CPU lcores: 128 00:05:15.530 EAL: Detected NUMA nodes: 2 00:05:15.530 EAL: Detected shared linkage of DPDK 00:05:15.530 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:15.530 EAL: Selected IOVA mode 'VA' 00:05:15.530 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.530 EAL: VFIO support initialized 00:05:15.530 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:15.530 00:05:15.530 00:05:15.530 CUnit - A unit testing framework for C - Version 2.1-3 00:05:15.530 http://cunit.sourceforge.net/ 00:05:15.530 00:05:15.530 00:05:15.530 Suite: memory 00:05:15.530 Test: test ... 
00:05:15.530 register 0x200000200000 2097152 00:05:15.530 malloc 3145728 00:05:15.530 register 0x200000400000 4194304 00:05:15.530 buf 0x200000500000 len 3145728 PASSED 00:05:15.530 malloc 64 00:05:15.530 buf 0x2000004fff40 len 64 PASSED 00:05:15.530 malloc 4194304 00:05:15.530 register 0x200000800000 6291456 00:05:15.530 buf 0x200000a00000 len 4194304 PASSED 00:05:15.530 free 0x200000500000 3145728 00:05:15.530 free 0x2000004fff40 64 00:05:15.530 unregister 0x200000400000 4194304 PASSED 00:05:15.530 free 0x200000a00000 4194304 00:05:15.530 unregister 0x200000800000 6291456 PASSED 00:05:15.530 malloc 8388608 00:05:15.530 register 0x200000400000 10485760 00:05:15.530 buf 0x200000600000 len 8388608 PASSED 00:05:15.530 free 0x200000600000 8388608 00:05:15.530 unregister 0x200000400000 10485760 PASSED 00:05:15.530 passed 00:05:15.530 00:05:15.530 Run Summary: Type Total Ran Passed Failed Inactive 00:05:15.530 suites 1 1 n/a 0 0 00:05:15.530 tests 1 1 1 0 0 00:05:15.530 asserts 15 15 15 0 n/a 00:05:15.530 00:05:15.530 Elapsed time = 0.006 seconds 00:05:15.530 00:05:15.530 real 0m0.062s 00:05:15.530 user 0m0.021s 00:05:15.530 sys 0m0.041s 00:05:15.530 22:02:40 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.530 22:02:40 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:15.530 ************************************ 00:05:15.530 END TEST env_mem_callbacks 00:05:15.530 ************************************ 00:05:15.530 22:02:40 env -- common/autotest_common.sh@1142 -- # return 0 00:05:15.530 00:05:15.530 real 0m7.275s 00:05:15.530 user 0m1.023s 00:05:15.530 sys 0m0.804s 00:05:15.530 22:02:40 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.530 22:02:40 env -- common/autotest_common.sh@10 -- # set +x 00:05:15.530 ************************************ 00:05:15.530 END TEST env 00:05:15.530 ************************************ 00:05:15.530 22:02:40 -- common/autotest_common.sh@1142 -- # return 0 00:05:15.530 22:02:40 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:15.530 22:02:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.530 22:02:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.530 22:02:40 -- common/autotest_common.sh@10 -- # set +x 00:05:15.530 ************************************ 00:05:15.530 START TEST rpc 00:05:15.530 ************************************ 00:05:15.530 22:02:40 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:15.530 * Looking for test storage... 00:05:15.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:15.530 22:02:40 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2553048 00:05:15.530 22:02:40 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:15.530 22:02:40 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:15.530 22:02:40 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2553048 00:05:15.530 22:02:40 rpc -- common/autotest_common.sh@829 -- # '[' -z 2553048 ']' 00:05:15.530 22:02:40 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.530 22:02:40 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.530 22:02:40 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:15.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.530 22:02:40 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.530 22:02:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.530 [2024-07-15 22:02:40.782030] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:05:15.530 [2024-07-15 22:02:40.782105] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2553048 ] 00:05:15.530 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.530 [2024-07-15 22:02:40.845554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.790 [2024-07-15 22:02:40.921935] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:15.790 [2024-07-15 22:02:40.921973] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2553048' to capture a snapshot of events at runtime. 00:05:15.790 [2024-07-15 22:02:40.921981] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:15.790 [2024-07-15 22:02:40.921988] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:15.790 [2024-07-15 22:02:40.921993] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2553048 for offline analysis/debug. 00:05:15.790 [2024-07-15 22:02:40.922013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.363 22:02:41 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:16.364 22:02:41 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:16.364 22:02:41 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:16.364 22:02:41 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:16.364 22:02:41 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:16.364 22:02:41 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:16.364 22:02:41 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.364 22:02:41 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.364 22:02:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.364 ************************************ 00:05:16.364 START TEST rpc_integrity 00:05:16.364 ************************************ 00:05:16.364 22:02:41 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:16.364 22:02:41 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:16.364 22:02:41 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.364 22:02:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.364 22:02:41 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.364 22:02:41 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:05:16.364 22:02:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:16.364 22:02:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:16.364 22:02:41 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:16.364 22:02:41 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.364 22:02:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.364 22:02:41 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.364 22:02:41 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:16.364 22:02:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:16.364 22:02:41 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.364 22:02:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.364 22:02:41 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.364 22:02:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:16.364 { 00:05:16.364 "name": "Malloc0", 00:05:16.364 "aliases": [ 00:05:16.364 "704ec816-fd78-47ab-8c35-78a576e1cdea" 00:05:16.364 ], 00:05:16.364 "product_name": "Malloc disk", 00:05:16.364 "block_size": 512, 00:05:16.364 "num_blocks": 16384, 00:05:16.364 "uuid": "704ec816-fd78-47ab-8c35-78a576e1cdea", 00:05:16.364 "assigned_rate_limits": { 00:05:16.364 "rw_ios_per_sec": 0, 00:05:16.364 "rw_mbytes_per_sec": 0, 00:05:16.364 "r_mbytes_per_sec": 0, 00:05:16.364 "w_mbytes_per_sec": 0 00:05:16.364 }, 00:05:16.364 "claimed": false, 00:05:16.364 "zoned": false, 00:05:16.364 "supported_io_types": { 00:05:16.364 "read": true, 00:05:16.364 "write": true, 00:05:16.364 "unmap": true, 00:05:16.364 "flush": true, 00:05:16.364 "reset": true, 00:05:16.364 "nvme_admin": false, 00:05:16.364 "nvme_io": false, 00:05:16.364 "nvme_io_md": false, 00:05:16.364 "write_zeroes": true, 00:05:16.364 "zcopy": true, 00:05:16.364 "get_zone_info": false, 00:05:16.364 "zone_management": false, 00:05:16.364 "zone_append": false, 00:05:16.364 "compare": false, 00:05:16.364 "compare_and_write": false, 00:05:16.364 "abort": true, 00:05:16.364 "seek_hole": false, 00:05:16.364 "seek_data": false, 00:05:16.364 "copy": true, 00:05:16.364 "nvme_iov_md": false 00:05:16.364 }, 00:05:16.364 "memory_domains": [ 00:05:16.364 { 00:05:16.364 "dma_device_id": "system", 00:05:16.364 "dma_device_type": 1 00:05:16.364 }, 00:05:16.364 { 00:05:16.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:16.364 "dma_device_type": 2 00:05:16.364 } 00:05:16.364 ], 00:05:16.364 "driver_specific": {} 00:05:16.364 } 00:05:16.364 ]' 00:05:16.364 22:02:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:16.364 22:02:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:16.364 22:02:41 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:16.364 22:02:41 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.364 22:02:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.364 [2024-07-15 22:02:41.683287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:16.364 [2024-07-15 22:02:41.683318] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:16.364 [2024-07-15 22:02:41.683330] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2278d80 00:05:16.364 [2024-07-15 22:02:41.683337] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:16.364 
[2024-07-15 22:02:41.684695] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:16.364 [2024-07-15 22:02:41.684716] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:16.364 Passthru0 00:05:16.625 22:02:41 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.625 22:02:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:16.625 22:02:41 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.625 22:02:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.625 22:02:41 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.625 22:02:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:16.625 { 00:05:16.625 "name": "Malloc0", 00:05:16.625 "aliases": [ 00:05:16.625 "704ec816-fd78-47ab-8c35-78a576e1cdea" 00:05:16.625 ], 00:05:16.625 "product_name": "Malloc disk", 00:05:16.625 "block_size": 512, 00:05:16.625 "num_blocks": 16384, 00:05:16.625 "uuid": "704ec816-fd78-47ab-8c35-78a576e1cdea", 00:05:16.625 "assigned_rate_limits": { 00:05:16.625 "rw_ios_per_sec": 0, 00:05:16.625 "rw_mbytes_per_sec": 0, 00:05:16.625 "r_mbytes_per_sec": 0, 00:05:16.625 "w_mbytes_per_sec": 0 00:05:16.625 }, 00:05:16.625 "claimed": true, 00:05:16.625 "claim_type": "exclusive_write", 00:05:16.625 "zoned": false, 00:05:16.625 "supported_io_types": { 00:05:16.625 "read": true, 00:05:16.625 "write": true, 00:05:16.625 "unmap": true, 00:05:16.625 "flush": true, 00:05:16.625 "reset": true, 00:05:16.625 "nvme_admin": false, 00:05:16.625 "nvme_io": false, 00:05:16.625 "nvme_io_md": false, 00:05:16.625 "write_zeroes": true, 00:05:16.625 "zcopy": true, 00:05:16.625 "get_zone_info": false, 00:05:16.625 "zone_management": false, 00:05:16.625 "zone_append": false, 00:05:16.625 "compare": false, 00:05:16.625 "compare_and_write": false, 00:05:16.625 "abort": true, 00:05:16.625 "seek_hole": false, 00:05:16.625 "seek_data": false, 00:05:16.625 "copy": true, 00:05:16.625 "nvme_iov_md": false 00:05:16.625 }, 00:05:16.625 "memory_domains": [ 00:05:16.625 { 00:05:16.625 "dma_device_id": "system", 00:05:16.625 "dma_device_type": 1 00:05:16.625 }, 00:05:16.625 { 00:05:16.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:16.625 "dma_device_type": 2 00:05:16.625 } 00:05:16.625 ], 00:05:16.625 "driver_specific": {} 00:05:16.625 }, 00:05:16.625 { 00:05:16.625 "name": "Passthru0", 00:05:16.625 "aliases": [ 00:05:16.625 "997a9d9a-be06-5730-a15e-e298914c595b" 00:05:16.625 ], 00:05:16.625 "product_name": "passthru", 00:05:16.625 "block_size": 512, 00:05:16.625 "num_blocks": 16384, 00:05:16.625 "uuid": "997a9d9a-be06-5730-a15e-e298914c595b", 00:05:16.625 "assigned_rate_limits": { 00:05:16.625 "rw_ios_per_sec": 0, 00:05:16.625 "rw_mbytes_per_sec": 0, 00:05:16.625 "r_mbytes_per_sec": 0, 00:05:16.625 "w_mbytes_per_sec": 0 00:05:16.625 }, 00:05:16.625 "claimed": false, 00:05:16.625 "zoned": false, 00:05:16.625 "supported_io_types": { 00:05:16.625 "read": true, 00:05:16.625 "write": true, 00:05:16.625 "unmap": true, 00:05:16.625 "flush": true, 00:05:16.625 "reset": true, 00:05:16.625 "nvme_admin": false, 00:05:16.625 "nvme_io": false, 00:05:16.625 "nvme_io_md": false, 00:05:16.625 "write_zeroes": true, 00:05:16.625 "zcopy": true, 00:05:16.625 "get_zone_info": false, 00:05:16.625 "zone_management": false, 00:05:16.625 "zone_append": false, 00:05:16.625 "compare": false, 00:05:16.625 "compare_and_write": false, 00:05:16.625 "abort": true, 00:05:16.625 "seek_hole": false, 
00:05:16.625 "seek_data": false, 00:05:16.625 "copy": true, 00:05:16.625 "nvme_iov_md": false 00:05:16.625 }, 00:05:16.625 "memory_domains": [ 00:05:16.625 { 00:05:16.625 "dma_device_id": "system", 00:05:16.625 "dma_device_type": 1 00:05:16.625 }, 00:05:16.625 { 00:05:16.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:16.625 "dma_device_type": 2 00:05:16.625 } 00:05:16.625 ], 00:05:16.625 "driver_specific": { 00:05:16.625 "passthru": { 00:05:16.625 "name": "Passthru0", 00:05:16.625 "base_bdev_name": "Malloc0" 00:05:16.625 } 00:05:16.625 } 00:05:16.625 } 00:05:16.625 ]' 00:05:16.625 22:02:41 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:16.625 22:02:41 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:16.625 22:02:41 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:16.625 22:02:41 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.625 22:02:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.625 22:02:41 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.625 22:02:41 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:16.625 22:02:41 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.625 22:02:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.625 22:02:41 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.625 22:02:41 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:16.625 22:02:41 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.625 22:02:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.625 22:02:41 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.625 22:02:41 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:16.625 22:02:41 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:16.625 22:02:41 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:16.625 00:05:16.625 real 0m0.285s 00:05:16.625 user 0m0.182s 00:05:16.625 sys 0m0.037s 00:05:16.625 22:02:41 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.625 22:02:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.625 ************************************ 00:05:16.625 END TEST rpc_integrity 00:05:16.625 ************************************ 00:05:16.625 22:02:41 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:16.625 22:02:41 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:16.625 22:02:41 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.625 22:02:41 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.625 22:02:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.625 ************************************ 00:05:16.625 START TEST rpc_plugins 00:05:16.625 ************************************ 00:05:16.625 22:02:41 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:16.625 22:02:41 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:16.625 22:02:41 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.625 22:02:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:16.625 22:02:41 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.625 22:02:41 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:16.625 22:02:41 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:05:16.625 22:02:41 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.625 22:02:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:16.625 22:02:41 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.625 22:02:41 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:16.625 { 00:05:16.625 "name": "Malloc1", 00:05:16.625 "aliases": [ 00:05:16.625 "6fb386c2-dbb3-434d-add3-71005e82c581" 00:05:16.625 ], 00:05:16.625 "product_name": "Malloc disk", 00:05:16.625 "block_size": 4096, 00:05:16.625 "num_blocks": 256, 00:05:16.625 "uuid": "6fb386c2-dbb3-434d-add3-71005e82c581", 00:05:16.625 "assigned_rate_limits": { 00:05:16.625 "rw_ios_per_sec": 0, 00:05:16.625 "rw_mbytes_per_sec": 0, 00:05:16.625 "r_mbytes_per_sec": 0, 00:05:16.625 "w_mbytes_per_sec": 0 00:05:16.625 }, 00:05:16.625 "claimed": false, 00:05:16.625 "zoned": false, 00:05:16.625 "supported_io_types": { 00:05:16.625 "read": true, 00:05:16.625 "write": true, 00:05:16.625 "unmap": true, 00:05:16.625 "flush": true, 00:05:16.625 "reset": true, 00:05:16.625 "nvme_admin": false, 00:05:16.625 "nvme_io": false, 00:05:16.625 "nvme_io_md": false, 00:05:16.625 "write_zeroes": true, 00:05:16.625 "zcopy": true, 00:05:16.625 "get_zone_info": false, 00:05:16.625 "zone_management": false, 00:05:16.625 "zone_append": false, 00:05:16.625 "compare": false, 00:05:16.625 "compare_and_write": false, 00:05:16.625 "abort": true, 00:05:16.625 "seek_hole": false, 00:05:16.625 "seek_data": false, 00:05:16.625 "copy": true, 00:05:16.625 "nvme_iov_md": false 00:05:16.625 }, 00:05:16.625 "memory_domains": [ 00:05:16.625 { 00:05:16.625 "dma_device_id": "system", 00:05:16.625 "dma_device_type": 1 00:05:16.625 }, 00:05:16.625 { 00:05:16.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:16.625 "dma_device_type": 2 00:05:16.625 } 00:05:16.625 ], 00:05:16.625 "driver_specific": {} 00:05:16.625 } 00:05:16.625 ]' 00:05:16.625 22:02:41 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:16.887 22:02:41 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:16.887 22:02:41 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:16.887 22:02:41 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.887 22:02:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:16.887 22:02:41 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.887 22:02:41 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:16.887 22:02:41 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.887 22:02:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:16.887 22:02:41 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.887 22:02:41 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:16.887 22:02:41 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:16.887 22:02:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:16.887 00:05:16.887 real 0m0.116s 00:05:16.887 user 0m0.072s 00:05:16.887 sys 0m0.014s 00:05:16.887 22:02:42 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.887 22:02:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:16.887 ************************************ 00:05:16.887 END TEST rpc_plugins 00:05:16.887 ************************************ 00:05:16.887 22:02:42 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:16.887 22:02:42 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:16.887 22:02:42 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.887 22:02:42 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.887 22:02:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.887 ************************************ 00:05:16.887 START TEST rpc_trace_cmd_test 00:05:16.887 ************************************ 00:05:16.887 22:02:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:16.887 22:02:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:16.887 22:02:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:16.887 22:02:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.887 22:02:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:16.887 22:02:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.887 22:02:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:16.887 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2553048", 00:05:16.887 "tpoint_group_mask": "0x8", 00:05:16.887 "iscsi_conn": { 00:05:16.887 "mask": "0x2", 00:05:16.887 "tpoint_mask": "0x0" 00:05:16.887 }, 00:05:16.887 "scsi": { 00:05:16.887 "mask": "0x4", 00:05:16.887 "tpoint_mask": "0x0" 00:05:16.887 }, 00:05:16.887 "bdev": { 00:05:16.887 "mask": "0x8", 00:05:16.887 "tpoint_mask": "0xffffffffffffffff" 00:05:16.887 }, 00:05:16.887 "nvmf_rdma": { 00:05:16.887 "mask": "0x10", 00:05:16.887 "tpoint_mask": "0x0" 00:05:16.887 }, 00:05:16.887 "nvmf_tcp": { 00:05:16.887 "mask": "0x20", 00:05:16.887 "tpoint_mask": "0x0" 00:05:16.887 }, 00:05:16.887 "ftl": { 00:05:16.887 "mask": "0x40", 00:05:16.887 "tpoint_mask": "0x0" 00:05:16.887 }, 00:05:16.887 "blobfs": { 00:05:16.887 "mask": "0x80", 00:05:16.887 "tpoint_mask": "0x0" 00:05:16.887 }, 00:05:16.887 "dsa": { 00:05:16.887 "mask": "0x200", 00:05:16.887 "tpoint_mask": "0x0" 00:05:16.887 }, 00:05:16.887 "thread": { 00:05:16.887 "mask": "0x400", 00:05:16.887 "tpoint_mask": "0x0" 00:05:16.887 }, 00:05:16.887 "nvme_pcie": { 00:05:16.887 "mask": "0x800", 00:05:16.887 "tpoint_mask": "0x0" 00:05:16.887 }, 00:05:16.887 "iaa": { 00:05:16.887 "mask": "0x1000", 00:05:16.887 "tpoint_mask": "0x0" 00:05:16.887 }, 00:05:16.887 "nvme_tcp": { 00:05:16.887 "mask": "0x2000", 00:05:16.887 "tpoint_mask": "0x0" 00:05:16.887 }, 00:05:16.887 "bdev_nvme": { 00:05:16.887 "mask": "0x4000", 00:05:16.887 "tpoint_mask": "0x0" 00:05:16.887 }, 00:05:16.887 "sock": { 00:05:16.887 "mask": "0x8000", 00:05:16.887 "tpoint_mask": "0x0" 00:05:16.887 } 00:05:16.887 }' 00:05:16.887 22:02:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:16.887 22:02:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:16.887 22:02:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:17.148 22:02:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:17.148 22:02:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:17.148 22:02:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:17.148 22:02:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:17.148 22:02:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:17.148 22:02:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:17.148 22:02:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
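The checks above confirm that the target, started with '-e bdev', reports tpoint_group_mask 0x8 and a fully enabled bdev mask. A short sketch of how the same information can be pulled from a running target by hand, assuming the standard scripts/rpc.py JSON-RPC client shipped with SPDK; the trace_get_info method name is taken from this log, and the spdk_trace invocation mirrors the hint the target printed at startup:

  # Ask the running target which tracepoint groups are enabled.
  ./scripts/rpc.py trace_get_info | jq -r '.tpoint_group_mask'
  # Snapshot the bdev tracepoints from the shared-memory trace file the target
  # announced (replace <pid> with the spdk_tgt pid shown in this log).
  spdk_trace -s spdk_tgt -p <pid>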
00:05:17.148 00:05:17.148 real 0m0.212s 00:05:17.148 user 0m0.181s 00:05:17.148 sys 0m0.024s 00:05:17.148 22:02:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.148 22:02:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:17.148 ************************************ 00:05:17.148 END TEST rpc_trace_cmd_test 00:05:17.148 ************************************ 00:05:17.148 22:02:42 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:17.148 22:02:42 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:17.148 22:02:42 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:17.148 22:02:42 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:17.148 22:02:42 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.148 22:02:42 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.148 22:02:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.148 ************************************ 00:05:17.148 START TEST rpc_daemon_integrity 00:05:17.148 ************************************ 00:05:17.148 22:02:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:17.148 22:02:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:17.148 22:02:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.148 22:02:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.148 22:02:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.148 22:02:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:17.148 22:02:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:17.148 22:02:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:17.148 22:02:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:17.148 22:02:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.148 22:02:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.149 22:02:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.149 22:02:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:17.149 22:02:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:17.149 22:02:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.149 22:02:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.410 22:02:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.410 22:02:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:17.410 { 00:05:17.410 "name": "Malloc2", 00:05:17.410 "aliases": [ 00:05:17.410 "cc3533c2-968f-4837-9e57-26bf1d6e5421" 00:05:17.410 ], 00:05:17.410 "product_name": "Malloc disk", 00:05:17.410 "block_size": 512, 00:05:17.410 "num_blocks": 16384, 00:05:17.410 "uuid": "cc3533c2-968f-4837-9e57-26bf1d6e5421", 00:05:17.410 "assigned_rate_limits": { 00:05:17.410 "rw_ios_per_sec": 0, 00:05:17.410 "rw_mbytes_per_sec": 0, 00:05:17.410 "r_mbytes_per_sec": 0, 00:05:17.410 "w_mbytes_per_sec": 0 00:05:17.410 }, 00:05:17.410 "claimed": false, 00:05:17.410 "zoned": false, 00:05:17.410 "supported_io_types": { 00:05:17.410 "read": true, 00:05:17.410 "write": true, 00:05:17.410 "unmap": true, 00:05:17.410 "flush": true, 00:05:17.410 "reset": true, 00:05:17.410 "nvme_admin": false, 00:05:17.410 "nvme_io": false, 
00:05:17.410 "nvme_io_md": false, 00:05:17.410 "write_zeroes": true, 00:05:17.410 "zcopy": true, 00:05:17.410 "get_zone_info": false, 00:05:17.410 "zone_management": false, 00:05:17.410 "zone_append": false, 00:05:17.410 "compare": false, 00:05:17.410 "compare_and_write": false, 00:05:17.410 "abort": true, 00:05:17.410 "seek_hole": false, 00:05:17.410 "seek_data": false, 00:05:17.410 "copy": true, 00:05:17.410 "nvme_iov_md": false 00:05:17.410 }, 00:05:17.410 "memory_domains": [ 00:05:17.410 { 00:05:17.410 "dma_device_id": "system", 00:05:17.410 "dma_device_type": 1 00:05:17.410 }, 00:05:17.410 { 00:05:17.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.410 "dma_device_type": 2 00:05:17.410 } 00:05:17.410 ], 00:05:17.410 "driver_specific": {} 00:05:17.410 } 00:05:17.410 ]' 00:05:17.410 22:02:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:17.410 22:02:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:17.410 22:02:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:17.410 22:02:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.410 22:02:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.410 [2024-07-15 22:02:42.529593] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:17.410 [2024-07-15 22:02:42.529622] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:17.410 [2024-07-15 22:02:42.529634] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2279a90 00:05:17.410 [2024-07-15 22:02:42.529641] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:17.410 [2024-07-15 22:02:42.530881] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:17.410 [2024-07-15 22:02:42.530901] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:17.410 Passthru0 00:05:17.410 22:02:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.410 22:02:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:17.410 22:02:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.410 22:02:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.410 22:02:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.410 22:02:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:17.410 { 00:05:17.410 "name": "Malloc2", 00:05:17.410 "aliases": [ 00:05:17.410 "cc3533c2-968f-4837-9e57-26bf1d6e5421" 00:05:17.410 ], 00:05:17.410 "product_name": "Malloc disk", 00:05:17.410 "block_size": 512, 00:05:17.410 "num_blocks": 16384, 00:05:17.410 "uuid": "cc3533c2-968f-4837-9e57-26bf1d6e5421", 00:05:17.410 "assigned_rate_limits": { 00:05:17.410 "rw_ios_per_sec": 0, 00:05:17.410 "rw_mbytes_per_sec": 0, 00:05:17.410 "r_mbytes_per_sec": 0, 00:05:17.410 "w_mbytes_per_sec": 0 00:05:17.410 }, 00:05:17.410 "claimed": true, 00:05:17.410 "claim_type": "exclusive_write", 00:05:17.410 "zoned": false, 00:05:17.410 "supported_io_types": { 00:05:17.410 "read": true, 00:05:17.410 "write": true, 00:05:17.410 "unmap": true, 00:05:17.410 "flush": true, 00:05:17.410 "reset": true, 00:05:17.410 "nvme_admin": false, 00:05:17.410 "nvme_io": false, 00:05:17.410 "nvme_io_md": false, 00:05:17.410 "write_zeroes": true, 00:05:17.410 "zcopy": true, 00:05:17.410 "get_zone_info": 
false, 00:05:17.410 "zone_management": false, 00:05:17.410 "zone_append": false, 00:05:17.410 "compare": false, 00:05:17.410 "compare_and_write": false, 00:05:17.410 "abort": true, 00:05:17.410 "seek_hole": false, 00:05:17.410 "seek_data": false, 00:05:17.410 "copy": true, 00:05:17.410 "nvme_iov_md": false 00:05:17.410 }, 00:05:17.410 "memory_domains": [ 00:05:17.410 { 00:05:17.410 "dma_device_id": "system", 00:05:17.410 "dma_device_type": 1 00:05:17.410 }, 00:05:17.410 { 00:05:17.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.410 "dma_device_type": 2 00:05:17.410 } 00:05:17.410 ], 00:05:17.410 "driver_specific": {} 00:05:17.410 }, 00:05:17.410 { 00:05:17.410 "name": "Passthru0", 00:05:17.410 "aliases": [ 00:05:17.410 "b89baf4e-d191-599b-b733-70150038d4db" 00:05:17.410 ], 00:05:17.410 "product_name": "passthru", 00:05:17.410 "block_size": 512, 00:05:17.410 "num_blocks": 16384, 00:05:17.410 "uuid": "b89baf4e-d191-599b-b733-70150038d4db", 00:05:17.410 "assigned_rate_limits": { 00:05:17.410 "rw_ios_per_sec": 0, 00:05:17.410 "rw_mbytes_per_sec": 0, 00:05:17.410 "r_mbytes_per_sec": 0, 00:05:17.410 "w_mbytes_per_sec": 0 00:05:17.410 }, 00:05:17.410 "claimed": false, 00:05:17.410 "zoned": false, 00:05:17.410 "supported_io_types": { 00:05:17.410 "read": true, 00:05:17.410 "write": true, 00:05:17.410 "unmap": true, 00:05:17.410 "flush": true, 00:05:17.410 "reset": true, 00:05:17.410 "nvme_admin": false, 00:05:17.410 "nvme_io": false, 00:05:17.410 "nvme_io_md": false, 00:05:17.411 "write_zeroes": true, 00:05:17.411 "zcopy": true, 00:05:17.411 "get_zone_info": false, 00:05:17.411 "zone_management": false, 00:05:17.411 "zone_append": false, 00:05:17.411 "compare": false, 00:05:17.411 "compare_and_write": false, 00:05:17.411 "abort": true, 00:05:17.411 "seek_hole": false, 00:05:17.411 "seek_data": false, 00:05:17.411 "copy": true, 00:05:17.411 "nvme_iov_md": false 00:05:17.411 }, 00:05:17.411 "memory_domains": [ 00:05:17.411 { 00:05:17.411 "dma_device_id": "system", 00:05:17.411 "dma_device_type": 1 00:05:17.411 }, 00:05:17.411 { 00:05:17.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.411 "dma_device_type": 2 00:05:17.411 } 00:05:17.411 ], 00:05:17.411 "driver_specific": { 00:05:17.411 "passthru": { 00:05:17.411 "name": "Passthru0", 00:05:17.411 "base_bdev_name": "Malloc2" 00:05:17.411 } 00:05:17.411 } 00:05:17.411 } 00:05:17.411 ]' 00:05:17.411 22:02:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:17.411 22:02:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:17.411 22:02:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:17.411 22:02:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.411 22:02:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.411 22:02:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.411 22:02:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:17.411 22:02:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.411 22:02:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.411 22:02:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.411 22:02:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:17.411 22:02:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.411 22:02:42 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.411 22:02:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.411 22:02:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:17.411 22:02:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:17.411 22:02:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:17.411 00:05:17.411 real 0m0.292s 00:05:17.411 user 0m0.179s 00:05:17.411 sys 0m0.045s 00:05:17.411 22:02:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.411 22:02:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.411 ************************************ 00:05:17.411 END TEST rpc_daemon_integrity 00:05:17.411 ************************************ 00:05:17.411 22:02:42 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:17.411 22:02:42 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:17.411 22:02:42 rpc -- rpc/rpc.sh@84 -- # killprocess 2553048 00:05:17.411 22:02:42 rpc -- common/autotest_common.sh@948 -- # '[' -z 2553048 ']' 00:05:17.411 22:02:42 rpc -- common/autotest_common.sh@952 -- # kill -0 2553048 00:05:17.411 22:02:42 rpc -- common/autotest_common.sh@953 -- # uname 00:05:17.411 22:02:42 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:17.411 22:02:42 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2553048 00:05:17.672 22:02:42 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:17.672 22:02:42 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:17.672 22:02:42 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2553048' 00:05:17.672 killing process with pid 2553048 00:05:17.672 22:02:42 rpc -- common/autotest_common.sh@967 -- # kill 2553048 00:05:17.672 22:02:42 rpc -- common/autotest_common.sh@972 -- # wait 2553048 00:05:17.672 00:05:17.672 real 0m2.360s 00:05:17.672 user 0m3.061s 00:05:17.672 sys 0m0.679s 00:05:17.672 22:02:42 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.672 22:02:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.672 ************************************ 00:05:17.672 END TEST rpc 00:05:17.672 ************************************ 00:05:17.933 22:02:43 -- common/autotest_common.sh@1142 -- # return 0 00:05:17.933 22:02:43 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:17.933 22:02:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.933 22:02:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.933 22:02:43 -- common/autotest_common.sh@10 -- # set +x 00:05:17.933 ************************************ 00:05:17.933 START TEST skip_rpc 00:05:17.933 ************************************ 00:05:17.933 22:02:43 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:17.933 * Looking for test storage... 
00:05:17.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:17.933 22:02:43 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:17.933 22:02:43 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:17.933 22:02:43 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:17.933 22:02:43 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.933 22:02:43 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.933 22:02:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.933 ************************************ 00:05:17.933 START TEST skip_rpc 00:05:17.933 ************************************ 00:05:17.933 22:02:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:17.933 22:02:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2553830 00:05:17.933 22:02:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:17.933 22:02:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:17.933 22:02:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:18.193 [2024-07-15 22:02:43.262552] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:05:18.193 [2024-07-15 22:02:43.262614] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2553830 ] 00:05:18.193 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.193 [2024-07-15 22:02:43.327806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.193 [2024-07-15 22:02:43.401812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.478 22:02:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:23.478 22:02:48 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:23.478 22:02:48 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:23.478 22:02:48 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:23.478 22:02:48 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.478 22:02:48 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:23.478 22:02:48 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.478 22:02:48 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:23.478 22:02:48 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.478 22:02:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.478 22:02:48 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:23.478 22:02:48 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:23.478 22:02:48 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:23.478 22:02:48 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:23.478 22:02:48 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:23.478 22:02:48 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:23.478 22:02:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2553830 00:05:23.478 22:02:48 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 2553830 ']' 00:05:23.478 22:02:48 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 2553830 00:05:23.478 22:02:48 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:23.478 22:02:48 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:23.478 22:02:48 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2553830 00:05:23.478 22:02:48 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:23.479 22:02:48 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:23.479 22:02:48 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2553830' 00:05:23.479 killing process with pid 2553830 00:05:23.479 22:02:48 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 2553830 00:05:23.479 22:02:48 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 2553830 00:05:23.479 00:05:23.479 real 0m5.276s 00:05:23.479 user 0m5.075s 00:05:23.479 sys 0m0.241s 00:05:23.479 22:02:48 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.479 22:02:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.479 ************************************ 00:05:23.479 END TEST skip_rpc 00:05:23.479 ************************************ 00:05:23.479 22:02:48 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:23.479 22:02:48 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:23.479 22:02:48 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:23.479 22:02:48 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.479 22:02:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.479 ************************************ 00:05:23.479 START TEST skip_rpc_with_json 00:05:23.479 ************************************ 00:05:23.479 22:02:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:23.479 22:02:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:23.479 22:02:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2554919 00:05:23.479 22:02:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:23.479 22:02:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2554919 00:05:23.479 22:02:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:23.479 22:02:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 2554919 ']' 00:05:23.479 22:02:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.479 22:02:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.479 22:02:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
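At this point the harness is waiting for the freshly started spdk_tgt to open its JSON-RPC socket. A rough equivalent of that readiness check, assuming the standard scripts/rpc.py client and the default socket path shown in this log (spdk_get_version is the same method the skip_rpc test probed above):

  # Poll the UNIX-domain RPC socket until the target answers; any cheap RPC will do.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      sleep 0.1
  done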
00:05:23.479 22:02:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.479 22:02:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:23.479 [2024-07-15 22:02:48.614723] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:05:23.479 [2024-07-15 22:02:48.614783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2554919 ] 00:05:23.479 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.479 [2024-07-15 22:02:48.679793] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.479 [2024-07-15 22:02:48.753500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.438 22:02:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:24.438 22:02:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:24.438 22:02:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:24.438 22:02:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.438 22:02:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:24.438 [2024-07-15 22:02:49.387868] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:24.438 request: 00:05:24.438 { 00:05:24.438 "trtype": "tcp", 00:05:24.438 "method": "nvmf_get_transports", 00:05:24.438 "req_id": 1 00:05:24.438 } 00:05:24.438 Got JSON-RPC error response 00:05:24.438 response: 00:05:24.438 { 00:05:24.438 "code": -19, 00:05:24.438 "message": "No such device" 00:05:24.438 } 00:05:24.438 22:02:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:24.438 22:02:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:24.438 22:02:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.439 22:02:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:24.439 [2024-07-15 22:02:49.399996] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:24.439 22:02:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.439 22:02:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:24.439 22:02:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.439 22:02:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:24.439 22:02:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.439 22:02:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:24.439 { 00:05:24.439 "subsystems": [ 00:05:24.439 { 00:05:24.439 "subsystem": "vfio_user_target", 00:05:24.439 "config": null 00:05:24.439 }, 00:05:24.439 { 00:05:24.439 "subsystem": "keyring", 00:05:24.439 "config": [] 00:05:24.439 }, 00:05:24.439 { 00:05:24.439 "subsystem": "iobuf", 00:05:24.439 "config": [ 00:05:24.439 { 00:05:24.439 "method": "iobuf_set_options", 00:05:24.439 "params": { 00:05:24.439 "small_pool_count": 8192, 00:05:24.439 "large_pool_count": 1024, 00:05:24.439 "small_bufsize": 8192, 00:05:24.439 "large_bufsize": 
135168 00:05:24.439 } 00:05:24.439 } 00:05:24.439 ] 00:05:24.439 }, 00:05:24.439 { 00:05:24.439 "subsystem": "sock", 00:05:24.439 "config": [ 00:05:24.439 { 00:05:24.439 "method": "sock_set_default_impl", 00:05:24.439 "params": { 00:05:24.439 "impl_name": "posix" 00:05:24.439 } 00:05:24.439 }, 00:05:24.439 { 00:05:24.439 "method": "sock_impl_set_options", 00:05:24.439 "params": { 00:05:24.439 "impl_name": "ssl", 00:05:24.439 "recv_buf_size": 4096, 00:05:24.439 "send_buf_size": 4096, 00:05:24.439 "enable_recv_pipe": true, 00:05:24.439 "enable_quickack": false, 00:05:24.439 "enable_placement_id": 0, 00:05:24.439 "enable_zerocopy_send_server": true, 00:05:24.439 "enable_zerocopy_send_client": false, 00:05:24.439 "zerocopy_threshold": 0, 00:05:24.439 "tls_version": 0, 00:05:24.439 "enable_ktls": false 00:05:24.439 } 00:05:24.439 }, 00:05:24.439 { 00:05:24.439 "method": "sock_impl_set_options", 00:05:24.439 "params": { 00:05:24.439 "impl_name": "posix", 00:05:24.439 "recv_buf_size": 2097152, 00:05:24.439 "send_buf_size": 2097152, 00:05:24.439 "enable_recv_pipe": true, 00:05:24.439 "enable_quickack": false, 00:05:24.439 "enable_placement_id": 0, 00:05:24.439 "enable_zerocopy_send_server": true, 00:05:24.439 "enable_zerocopy_send_client": false, 00:05:24.439 "zerocopy_threshold": 0, 00:05:24.439 "tls_version": 0, 00:05:24.439 "enable_ktls": false 00:05:24.439 } 00:05:24.439 } 00:05:24.439 ] 00:05:24.439 }, 00:05:24.439 { 00:05:24.439 "subsystem": "vmd", 00:05:24.439 "config": [] 00:05:24.439 }, 00:05:24.439 { 00:05:24.439 "subsystem": "accel", 00:05:24.439 "config": [ 00:05:24.439 { 00:05:24.439 "method": "accel_set_options", 00:05:24.439 "params": { 00:05:24.439 "small_cache_size": 128, 00:05:24.439 "large_cache_size": 16, 00:05:24.439 "task_count": 2048, 00:05:24.439 "sequence_count": 2048, 00:05:24.439 "buf_count": 2048 00:05:24.439 } 00:05:24.439 } 00:05:24.439 ] 00:05:24.439 }, 00:05:24.439 { 00:05:24.439 "subsystem": "bdev", 00:05:24.439 "config": [ 00:05:24.439 { 00:05:24.439 "method": "bdev_set_options", 00:05:24.439 "params": { 00:05:24.439 "bdev_io_pool_size": 65535, 00:05:24.439 "bdev_io_cache_size": 256, 00:05:24.439 "bdev_auto_examine": true, 00:05:24.439 "iobuf_small_cache_size": 128, 00:05:24.439 "iobuf_large_cache_size": 16 00:05:24.439 } 00:05:24.439 }, 00:05:24.439 { 00:05:24.439 "method": "bdev_raid_set_options", 00:05:24.439 "params": { 00:05:24.439 "process_window_size_kb": 1024 00:05:24.439 } 00:05:24.439 }, 00:05:24.439 { 00:05:24.439 "method": "bdev_iscsi_set_options", 00:05:24.439 "params": { 00:05:24.439 "timeout_sec": 30 00:05:24.439 } 00:05:24.439 }, 00:05:24.439 { 00:05:24.439 "method": "bdev_nvme_set_options", 00:05:24.439 "params": { 00:05:24.439 "action_on_timeout": "none", 00:05:24.439 "timeout_us": 0, 00:05:24.439 "timeout_admin_us": 0, 00:05:24.439 "keep_alive_timeout_ms": 10000, 00:05:24.439 "arbitration_burst": 0, 00:05:24.439 "low_priority_weight": 0, 00:05:24.439 "medium_priority_weight": 0, 00:05:24.439 "high_priority_weight": 0, 00:05:24.439 "nvme_adminq_poll_period_us": 10000, 00:05:24.439 "nvme_ioq_poll_period_us": 0, 00:05:24.439 "io_queue_requests": 0, 00:05:24.439 "delay_cmd_submit": true, 00:05:24.439 "transport_retry_count": 4, 00:05:24.439 "bdev_retry_count": 3, 00:05:24.439 "transport_ack_timeout": 0, 00:05:24.439 "ctrlr_loss_timeout_sec": 0, 00:05:24.439 "reconnect_delay_sec": 0, 00:05:24.439 "fast_io_fail_timeout_sec": 0, 00:05:24.439 "disable_auto_failback": false, 00:05:24.439 "generate_uuids": false, 00:05:24.439 "transport_tos": 0, 
00:05:24.439 "nvme_error_stat": false, 00:05:24.439 "rdma_srq_size": 0, 00:05:24.439 "io_path_stat": false, 00:05:24.439 "allow_accel_sequence": false, 00:05:24.439 "rdma_max_cq_size": 0, 00:05:24.439 "rdma_cm_event_timeout_ms": 0, 00:05:24.439 "dhchap_digests": [ 00:05:24.439 "sha256", 00:05:24.439 "sha384", 00:05:24.439 "sha512" 00:05:24.439 ], 00:05:24.439 "dhchap_dhgroups": [ 00:05:24.439 "null", 00:05:24.439 "ffdhe2048", 00:05:24.439 "ffdhe3072", 00:05:24.439 "ffdhe4096", 00:05:24.439 "ffdhe6144", 00:05:24.439 "ffdhe8192" 00:05:24.439 ] 00:05:24.439 } 00:05:24.439 }, 00:05:24.439 { 00:05:24.439 "method": "bdev_nvme_set_hotplug", 00:05:24.439 "params": { 00:05:24.439 "period_us": 100000, 00:05:24.439 "enable": false 00:05:24.439 } 00:05:24.439 }, 00:05:24.439 { 00:05:24.439 "method": "bdev_wait_for_examine" 00:05:24.439 } 00:05:24.439 ] 00:05:24.439 }, 00:05:24.439 { 00:05:24.439 "subsystem": "scsi", 00:05:24.439 "config": null 00:05:24.439 }, 00:05:24.439 { 00:05:24.439 "subsystem": "scheduler", 00:05:24.439 "config": [ 00:05:24.439 { 00:05:24.439 "method": "framework_set_scheduler", 00:05:24.439 "params": { 00:05:24.439 "name": "static" 00:05:24.439 } 00:05:24.439 } 00:05:24.439 ] 00:05:24.439 }, 00:05:24.439 { 00:05:24.439 "subsystem": "vhost_scsi", 00:05:24.439 "config": [] 00:05:24.439 }, 00:05:24.439 { 00:05:24.439 "subsystem": "vhost_blk", 00:05:24.439 "config": [] 00:05:24.439 }, 00:05:24.439 { 00:05:24.439 "subsystem": "ublk", 00:05:24.439 "config": [] 00:05:24.439 }, 00:05:24.439 { 00:05:24.439 "subsystem": "nbd", 00:05:24.439 "config": [] 00:05:24.439 }, 00:05:24.439 { 00:05:24.439 "subsystem": "nvmf", 00:05:24.439 "config": [ 00:05:24.439 { 00:05:24.439 "method": "nvmf_set_config", 00:05:24.439 "params": { 00:05:24.439 "discovery_filter": "match_any", 00:05:24.439 "admin_cmd_passthru": { 00:05:24.439 "identify_ctrlr": false 00:05:24.439 } 00:05:24.439 } 00:05:24.439 }, 00:05:24.439 { 00:05:24.439 "method": "nvmf_set_max_subsystems", 00:05:24.439 "params": { 00:05:24.439 "max_subsystems": 1024 00:05:24.439 } 00:05:24.439 }, 00:05:24.439 { 00:05:24.439 "method": "nvmf_set_crdt", 00:05:24.439 "params": { 00:05:24.439 "crdt1": 0, 00:05:24.439 "crdt2": 0, 00:05:24.439 "crdt3": 0 00:05:24.439 } 00:05:24.439 }, 00:05:24.439 { 00:05:24.439 "method": "nvmf_create_transport", 00:05:24.439 "params": { 00:05:24.439 "trtype": "TCP", 00:05:24.439 "max_queue_depth": 128, 00:05:24.439 "max_io_qpairs_per_ctrlr": 127, 00:05:24.439 "in_capsule_data_size": 4096, 00:05:24.439 "max_io_size": 131072, 00:05:24.439 "io_unit_size": 131072, 00:05:24.439 "max_aq_depth": 128, 00:05:24.439 "num_shared_buffers": 511, 00:05:24.439 "buf_cache_size": 4294967295, 00:05:24.439 "dif_insert_or_strip": false, 00:05:24.439 "zcopy": false, 00:05:24.439 "c2h_success": true, 00:05:24.439 "sock_priority": 0, 00:05:24.439 "abort_timeout_sec": 1, 00:05:24.439 "ack_timeout": 0, 00:05:24.439 "data_wr_pool_size": 0 00:05:24.439 } 00:05:24.439 } 00:05:24.439 ] 00:05:24.439 }, 00:05:24.439 { 00:05:24.439 "subsystem": "iscsi", 00:05:24.439 "config": [ 00:05:24.439 { 00:05:24.439 "method": "iscsi_set_options", 00:05:24.439 "params": { 00:05:24.439 "node_base": "iqn.2016-06.io.spdk", 00:05:24.439 "max_sessions": 128, 00:05:24.439 "max_connections_per_session": 2, 00:05:24.439 "max_queue_depth": 64, 00:05:24.439 "default_time2wait": 2, 00:05:24.439 "default_time2retain": 20, 00:05:24.439 "first_burst_length": 8192, 00:05:24.439 "immediate_data": true, 00:05:24.439 "allow_duplicated_isid": false, 00:05:24.439 
"error_recovery_level": 0, 00:05:24.439 "nop_timeout": 60, 00:05:24.439 "nop_in_interval": 30, 00:05:24.439 "disable_chap": false, 00:05:24.439 "require_chap": false, 00:05:24.439 "mutual_chap": false, 00:05:24.439 "chap_group": 0, 00:05:24.439 "max_large_datain_per_connection": 64, 00:05:24.439 "max_r2t_per_connection": 4, 00:05:24.439 "pdu_pool_size": 36864, 00:05:24.439 "immediate_data_pool_size": 16384, 00:05:24.439 "data_out_pool_size": 2048 00:05:24.439 } 00:05:24.439 } 00:05:24.439 ] 00:05:24.439 } 00:05:24.439 ] 00:05:24.439 } 00:05:24.439 22:02:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:24.440 22:02:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2554919 00:05:24.440 22:02:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2554919 ']' 00:05:24.440 22:02:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2554919 00:05:24.440 22:02:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:24.440 22:02:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:24.440 22:02:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2554919 00:05:24.440 22:02:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:24.440 22:02:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:24.440 22:02:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2554919' 00:05:24.440 killing process with pid 2554919 00:05:24.440 22:02:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2554919 00:05:24.440 22:02:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2554919 00:05:24.745 22:02:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2555208 00:05:24.745 22:02:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:24.745 22:02:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:30.035 22:02:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2555208 00:05:30.035 22:02:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2555208 ']' 00:05:30.035 22:02:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2555208 00:05:30.035 22:02:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:30.035 22:02:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:30.035 22:02:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2555208 00:05:30.035 22:02:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:30.035 22:02:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:30.035 22:02:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2555208' 00:05:30.035 killing process with pid 2555208 00:05:30.035 22:02:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2555208 00:05:30.035 22:02:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2555208 
00:05:30.035 22:02:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:30.035 22:02:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:30.035 00:05:30.035 real 0m6.559s 00:05:30.035 user 0m6.456s 00:05:30.035 sys 0m0.535s 00:05:30.035 22:02:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.035 22:02:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:30.035 ************************************ 00:05:30.035 END TEST skip_rpc_with_json 00:05:30.035 ************************************ 00:05:30.035 22:02:55 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:30.035 22:02:55 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:30.035 22:02:55 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.035 22:02:55 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.035 22:02:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.035 ************************************ 00:05:30.035 START TEST skip_rpc_with_delay 00:05:30.035 ************************************ 00:05:30.035 22:02:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:30.035 22:02:55 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:30.035 22:02:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:30.035 22:02:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:30.035 22:02:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:30.035 22:02:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:30.035 22:02:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:30.035 22:02:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:30.035 22:02:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:30.035 22:02:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:30.035 22:02:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:30.035 22:02:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:30.035 22:02:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:30.035 [2024-07-15 22:02:55.238954] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
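The error above is the expected outcome: skip_rpc_with_delay deliberately combines --no-rpc-server with --wait-for-rpc, which the app layer rejects because there would be no RPC server left to resume initialization. For contrast, a sketch of the normal --wait-for-rpc flow, assuming the standard scripts/rpc.py client; framework_start_init is assumed to be the current name of the RPC that resumes subsystem initialization and does not appear in this log:

  # Start the target paused, before any subsystem is initialized.
  spdk_tgt -m 0x1 --wait-for-rpc &
  # ...apply early-startup configuration here (e.g. sock_impl_set_options)...
  # Then let initialization proceed. framework_start_init is an assumption, not
  # something exercised by this test run.
  ./scripts/rpc.py framework_start_init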
00:05:30.035 [2024-07-15 22:02:55.239028] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:30.035 22:02:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:30.035 22:02:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:30.035 22:02:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:30.035 22:02:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:30.035 00:05:30.035 real 0m0.071s 00:05:30.035 user 0m0.041s 00:05:30.035 sys 0m0.029s 00:05:30.035 22:02:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.035 22:02:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:30.035 ************************************ 00:05:30.035 END TEST skip_rpc_with_delay 00:05:30.035 ************************************ 00:05:30.035 22:02:55 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:30.035 22:02:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:30.035 22:02:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:30.035 22:02:55 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:30.035 22:02:55 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.035 22:02:55 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.035 22:02:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.035 ************************************ 00:05:30.035 START TEST exit_on_failed_rpc_init 00:05:30.035 ************************************ 00:05:30.035 22:02:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:30.035 22:02:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2556328 00:05:30.035 22:02:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2556328 00:05:30.035 22:02:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:30.035 22:02:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 2556328 ']' 00:05:30.035 22:02:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.035 22:02:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.035 22:02:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.035 22:02:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.035 22:02:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:30.295 [2024-07-15 22:02:55.388892] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
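waitforlisten above blocks until the new target answers on /var/tmp/spdk.sock; a rough approximation of that wait, assuming rpc_get_methods is serviceable once the app is listening (the 100 mirrors the max_retries=100 set in the trace above):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for i in $(seq 1 100); do
    # succeed as soon as the RPC socket accepts a request
    if $rpc -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done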
00:05:30.296 [2024-07-15 22:02:55.388950] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2556328 ] 00:05:30.296 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.296 [2024-07-15 22:02:55.449068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.296 [2024-07-15 22:02:55.520105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.885 22:02:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.885 22:02:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:30.885 22:02:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:30.885 22:02:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:30.885 22:02:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:30.885 22:02:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:30.885 22:02:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:30.885 22:02:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:30.885 22:02:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:30.885 22:02:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:30.885 22:02:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:30.885 22:02:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:30.885 22:02:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:30.885 22:02:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:30.885 22:02:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:30.885 [2024-07-15 22:02:56.207703] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
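With the first target (pid 2556328) holding the default /var/tmp/spdk.sock, the test now starts a second instance on core mask 0x2 and requires it to fail RPC initialization; condensed, and leaving out the waitforlisten step, the scenario looks roughly like:

tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt

$tgt -m 0x1 &                 # first instance owns /var/tmp/spdk.sock
first_pid=$!

if $tgt -m 0x2; then          # same default RPC socket, so startup must abort
    echo 'unexpected: second target started despite the socket being in use' >&2
    exit 1
fi
kill -SIGINT "$first_pid"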
00:05:30.885 [2024-07-15 22:02:56.207754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2556385 ] 00:05:31.144 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.144 [2024-07-15 22:02:56.282654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.144 [2024-07-15 22:02:56.347437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.144 [2024-07-15 22:02:56.347495] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:31.144 [2024-07-15 22:02:56.347504] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:31.144 [2024-07-15 22:02:56.347511] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:31.144 22:02:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:31.144 22:02:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:31.144 22:02:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:31.144 22:02:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:31.144 22:02:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:31.144 22:02:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:31.144 22:02:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:31.144 22:02:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2556328 00:05:31.144 22:02:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 2556328 ']' 00:05:31.144 22:02:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 2556328 00:05:31.144 22:02:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:31.144 22:02:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:31.144 22:02:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2556328 00:05:31.144 22:02:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:31.144 22:02:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:31.144 22:02:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2556328' 00:05:31.144 killing process with pid 2556328 00:05:31.144 22:02:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 2556328 00:05:31.144 22:02:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 2556328 00:05:31.405 00:05:31.405 real 0m1.340s 00:05:31.405 user 0m1.576s 00:05:31.405 sys 0m0.363s 00:05:31.405 22:02:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.405 22:02:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:31.405 ************************************ 00:05:31.405 END TEST exit_on_failed_rpc_init 00:05:31.405 ************************************ 00:05:31.405 22:02:56 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:31.405 22:02:56 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:31.405 00:05:31.405 real 0m13.655s 00:05:31.405 user 0m13.297s 00:05:31.405 sys 0m1.452s 00:05:31.405 22:02:56 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.405 22:02:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.405 ************************************ 00:05:31.405 END TEST skip_rpc 00:05:31.405 ************************************ 00:05:31.664 22:02:56 -- common/autotest_common.sh@1142 -- # return 0 00:05:31.664 22:02:56 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:31.664 22:02:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.664 22:02:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.664 22:02:56 -- common/autotest_common.sh@10 -- # set +x 00:05:31.664 ************************************ 00:05:31.664 START TEST rpc_client 00:05:31.664 ************************************ 00:05:31.665 22:02:56 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:31.665 * Looking for test storage... 00:05:31.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:31.665 22:02:56 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:31.665 OK 00:05:31.665 22:02:56 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:31.665 00:05:31.665 real 0m0.123s 00:05:31.665 user 0m0.053s 00:05:31.665 sys 0m0.076s 00:05:31.665 22:02:56 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.665 22:02:56 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:31.665 ************************************ 00:05:31.665 END TEST rpc_client 00:05:31.665 ************************************ 00:05:31.665 22:02:56 -- common/autotest_common.sh@1142 -- # return 0 00:05:31.665 22:02:56 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:31.665 22:02:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.665 22:02:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.665 22:02:56 -- common/autotest_common.sh@10 -- # set +x 00:05:31.665 ************************************ 00:05:31.665 START TEST json_config 00:05:31.665 ************************************ 00:05:31.665 22:02:56 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:31.926 22:02:57 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:31.926 22:02:57 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:31.926 22:02:57 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:31.926 22:02:57 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:31.926 22:02:57 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:31.926 22:02:57 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:31.926 22:02:57 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:31.926 22:02:57 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:31.926 22:02:57 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:31.926 
22:02:57 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:31.926 22:02:57 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:31.926 22:02:57 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:31.926 22:02:57 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:31.926 22:02:57 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:31.926 22:02:57 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:31.926 22:02:57 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:31.926 22:02:57 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:31.926 22:02:57 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:31.926 22:02:57 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:31.926 22:02:57 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:31.926 22:02:57 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:31.926 22:02:57 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:31.926 22:02:57 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.926 22:02:57 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.926 22:02:57 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.926 22:02:57 json_config -- paths/export.sh@5 -- # export PATH 00:05:31.926 22:02:57 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.926 22:02:57 json_config -- nvmf/common.sh@47 -- # : 0 00:05:31.926 22:02:57 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:31.926 22:02:57 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:31.926 22:02:57 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:31.926 22:02:57 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:31.926 22:02:57 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:31.926 22:02:57 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:31.926 22:02:57 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:31.926 22:02:57 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:31.926 22:02:57 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:31.926 22:02:57 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:31.926 22:02:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:31.926 22:02:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:31.926 22:02:57 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:31.926 22:02:57 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:31.926 22:02:57 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:31.926 22:02:57 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:31.926 22:02:57 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:31.926 22:02:57 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:31.926 22:02:57 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:31.926 22:02:57 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:31.926 22:02:57 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:31.926 22:02:57 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:31.927 22:02:57 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:31.927 22:02:57 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:31.927 INFO: JSON configuration test init 00:05:31.927 22:02:57 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:31.927 22:02:57 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:31.927 22:02:57 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:31.927 22:02:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.927 22:02:57 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:31.927 22:02:57 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:31.927 22:02:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.927 22:02:57 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:31.927 22:02:57 json_config -- json_config/common.sh@9 -- # local app=target 00:05:31.927 22:02:57 json_config -- json_config/common.sh@10 -- # shift 00:05:31.927 22:02:57 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:31.927 22:02:57 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:31.927 22:02:57 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:31.927 22:02:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:31.927 22:02:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:31.927 22:02:57 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2556782 00:05:31.927 22:02:57 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:31.927 Waiting for target to run... 00:05:31.927 22:02:57 json_config -- json_config/common.sh@25 -- # waitforlisten 2556782 /var/tmp/spdk_tgt.sock 00:05:31.927 22:02:57 json_config -- common/autotest_common.sh@829 -- # '[' -z 2556782 ']' 00:05:31.927 22:02:57 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:31.927 22:02:57 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:31.927 22:02:57 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:31.927 22:02:57 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:31.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:31.927 22:02:57 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:31.927 22:02:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.927 [2024-07-15 22:02:57.160935] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:05:31.927 [2024-07-15 22:02:57.161005] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2556782 ] 00:05:31.927 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.187 [2024-07-15 22:02:57.470191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.498 [2024-07-15 22:02:57.530040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.757 22:02:57 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:32.757 22:02:57 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:32.757 22:02:57 json_config -- json_config/common.sh@26 -- # echo '' 00:05:32.757 00:05:32.757 22:02:57 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:32.757 22:02:57 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:32.757 22:02:57 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:32.757 22:02:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.757 22:02:57 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:32.757 22:02:57 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:32.757 22:02:57 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:32.757 22:02:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.757 22:02:57 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:32.757 22:02:57 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:32.757 22:02:57 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:33.325 22:02:58 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:33.325 22:02:58 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:33.325 22:02:58 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:33.325 22:02:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.325 22:02:58 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:33.325 22:02:58 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:33.325 22:02:58 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:33.325 22:02:58 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:33.325 22:02:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:33.325 22:02:58 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:33.584 22:02:58 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:33.584 22:02:58 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:33.584 22:02:58 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:33.584 22:02:58 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:33.584 22:02:58 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:33.584 22:02:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.584 22:02:58 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:33.584 22:02:58 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:33.584 22:02:58 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:33.584 22:02:58 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:33.584 22:02:58 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:33.584 22:02:58 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:33.584 22:02:58 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:33.584 22:02:58 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:33.584 22:02:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.584 22:02:58 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:33.584 22:02:58 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:33.584 22:02:58 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:33.584 22:02:58 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:33.584 22:02:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:33.584 MallocForNvmf0 00:05:33.584 22:02:58 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:33.584 22:02:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:33.844 MallocForNvmf1 00:05:33.844 22:02:59 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:33.844 22:02:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:34.104 [2024-07-15 22:02:59.171179] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:34.104 22:02:59 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:34.104 22:02:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:34.104 22:02:59 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:34.104 22:02:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:34.364 22:02:59 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:34.364 22:02:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:34.364 22:02:59 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:34.364 22:02:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:34.624 [2024-07-15 22:02:59.813238] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:34.624 22:02:59 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:34.624 22:02:59 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:34.624 22:02:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.624 22:02:59 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:34.624 22:02:59 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:34.624 22:02:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.624 22:02:59 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:34.624 22:02:59 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:34.624 22:02:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:34.884 MallocBdevForConfigChangeCheck 00:05:34.884 22:03:00 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:34.884 22:03:00 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:34.884 22:03:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.884 22:03:00 json_config -- 
json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:34.884 22:03:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:35.145 22:03:00 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:35.145 INFO: shutting down applications... 00:05:35.145 22:03:00 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:35.145 22:03:00 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:35.145 22:03:00 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:35.145 22:03:00 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:35.715 Calling clear_iscsi_subsystem 00:05:35.715 Calling clear_nvmf_subsystem 00:05:35.715 Calling clear_nbd_subsystem 00:05:35.715 Calling clear_ublk_subsystem 00:05:35.715 Calling clear_vhost_blk_subsystem 00:05:35.715 Calling clear_vhost_scsi_subsystem 00:05:35.715 Calling clear_bdev_subsystem 00:05:35.715 22:03:00 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:35.715 22:03:00 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:35.715 22:03:00 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:35.715 22:03:00 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:35.715 22:03:00 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:35.715 22:03:00 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:35.976 22:03:01 json_config -- json_config/json_config.sh@345 -- # break 00:05:35.976 22:03:01 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:35.976 22:03:01 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:35.976 22:03:01 json_config -- json_config/common.sh@31 -- # local app=target 00:05:35.976 22:03:01 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:35.976 22:03:01 json_config -- json_config/common.sh@35 -- # [[ -n 2556782 ]] 00:05:35.976 22:03:01 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2556782 00:05:35.976 22:03:01 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:35.976 22:03:01 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:35.976 22:03:01 json_config -- json_config/common.sh@41 -- # kill -0 2556782 00:05:35.976 22:03:01 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:36.547 22:03:01 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:36.547 22:03:01 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:36.547 22:03:01 json_config -- json_config/common.sh@41 -- # kill -0 2556782 00:05:36.547 22:03:01 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:36.547 22:03:01 json_config -- json_config/common.sh@43 -- # break 00:05:36.547 22:03:01 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:36.547 22:03:01 json_config -- json_config/common.sh@53 -- # echo 'SPDK target 
shutdown done' 00:05:36.547 SPDK target shutdown done 00:05:36.547 22:03:01 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:36.547 INFO: relaunching applications... 00:05:36.547 22:03:01 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:36.547 22:03:01 json_config -- json_config/common.sh@9 -- # local app=target 00:05:36.547 22:03:01 json_config -- json_config/common.sh@10 -- # shift 00:05:36.547 22:03:01 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:36.547 22:03:01 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:36.547 22:03:01 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:36.547 22:03:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:36.547 22:03:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:36.547 22:03:01 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2557959 00:05:36.547 22:03:01 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:36.547 Waiting for target to run... 00:05:36.547 22:03:01 json_config -- json_config/common.sh@25 -- # waitforlisten 2557959 /var/tmp/spdk_tgt.sock 00:05:36.547 22:03:01 json_config -- common/autotest_common.sh@829 -- # '[' -z 2557959 ']' 00:05:36.547 22:03:01 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:36.547 22:03:01 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:36.547 22:03:01 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.547 22:03:01 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:36.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:36.547 22:03:01 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.547 22:03:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.547 [2024-07-15 22:03:01.732188] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
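The spdk_tgt_config.json being replayed here was produced by the RPC sequence issued earlier in this test (json_config.sh@242-249 above); condensed, the NVMe/TCP target it describes is built with:

rpc='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'

$rpc bdev_malloc_create 8 512 --name MallocForNvmf0            # 8 MB malloc bdev, 512-byte blocks
$rpc bdev_malloc_create 4 1024 --name MallocForNvmf1           # 4 MB malloc bdev, 1024-byte blocks
$rpc nvmf_create_transport -t tcp -u 8192 -c 0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420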
00:05:36.547 [2024-07-15 22:03:01.732273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2557959 ] 00:05:36.547 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.119 [2024-07-15 22:03:02.155422] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.119 [2024-07-15 22:03:02.217070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.688 [2024-07-15 22:03:02.713542] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:37.688 [2024-07-15 22:03:02.745893] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:37.688 22:03:02 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.688 22:03:02 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:37.688 22:03:02 json_config -- json_config/common.sh@26 -- # echo '' 00:05:37.688 00:05:37.688 22:03:02 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:37.688 22:03:02 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:37.688 INFO: Checking if target configuration is the same... 00:05:37.688 22:03:02 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:37.688 22:03:02 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:37.689 22:03:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:37.689 + '[' 2 -ne 2 ']' 00:05:37.689 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:37.689 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:37.689 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:37.689 +++ basename /dev/fd/62 00:05:37.689 ++ mktemp /tmp/62.XXX 00:05:37.689 + tmp_file_1=/tmp/62.5Nb 00:05:37.689 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:37.689 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:37.689 + tmp_file_2=/tmp/spdk_tgt_config.json.GtG 00:05:37.689 + ret=0 00:05:37.689 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:37.948 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:37.948 + diff -u /tmp/62.5Nb /tmp/spdk_tgt_config.json.GtG 00:05:37.948 + echo 'INFO: JSON config files are the same' 00:05:37.948 INFO: JSON config files are the same 00:05:37.948 + rm /tmp/62.5Nb /tmp/spdk_tgt_config.json.GtG 00:05:37.948 + exit 0 00:05:37.948 22:03:03 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:37.948 22:03:03 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:37.948 INFO: changing configuration and checking if this can be detected... 
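Both this "files are the same" verdict and the change-detection pass that follows rely on json_diff.sh normalizing the two documents before comparing them. A sketch of that comparison, assuming config_filter.py -method sort filters stdin to stdout the way its use above suggests (the /tmp names are placeholders for the mktemp files):

filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# live configuration, sorted
$rpc -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > /tmp/live.sorted
# previously saved configuration, sorted the same way
$filter -method sort < /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json > /tmp/saved.sorted
diff -u /tmp/live.sorted /tmp/saved.sorted && echo 'INFO: JSON config files are the same'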
00:05:37.948 22:03:03 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:37.948 22:03:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:38.208 22:03:03 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:38.208 22:03:03 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:38.208 22:03:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:38.208 + '[' 2 -ne 2 ']' 00:05:38.208 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:38.208 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:38.208 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:38.208 +++ basename /dev/fd/62 00:05:38.208 ++ mktemp /tmp/62.XXX 00:05:38.208 + tmp_file_1=/tmp/62.C7w 00:05:38.208 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:38.208 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:38.208 + tmp_file_2=/tmp/spdk_tgt_config.json.9wl 00:05:38.208 + ret=0 00:05:38.208 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:38.467 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:38.467 + diff -u /tmp/62.C7w /tmp/spdk_tgt_config.json.9wl 00:05:38.467 + ret=1 00:05:38.467 + echo '=== Start of file: /tmp/62.C7w ===' 00:05:38.467 + cat /tmp/62.C7w 00:05:38.467 + echo '=== End of file: /tmp/62.C7w ===' 00:05:38.467 + echo '' 00:05:38.467 + echo '=== Start of file: /tmp/spdk_tgt_config.json.9wl ===' 00:05:38.467 + cat /tmp/spdk_tgt_config.json.9wl 00:05:38.467 + echo '=== End of file: /tmp/spdk_tgt_config.json.9wl ===' 00:05:38.467 + echo '' 00:05:38.467 + rm /tmp/62.C7w /tmp/spdk_tgt_config.json.9wl 00:05:38.467 + exit 1 00:05:38.467 22:03:03 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:38.467 INFO: configuration change detected. 
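The change detection works by mutating one throwaway object and rerunning the same comparison: MallocBdevForConfigChangeCheck was created at startup (json_config.sh@300 above) evidently so that deleting it here guarantees a diff. Skipping the sort normalization shown above, the core of the check is:

rpc='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'

$rpc bdev_malloc_delete MallocBdevForConfigChangeCheck    # remove the marker bdev
if $rpc save_config | diff -u - /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json; then
    echo 'ERROR: deleting the marker bdev was not detected as a configuration change' >&2
    exit 1
fi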
00:05:38.467 22:03:03 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:38.467 22:03:03 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:38.467 22:03:03 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:38.467 22:03:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.467 22:03:03 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:38.467 22:03:03 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:38.467 22:03:03 json_config -- json_config/json_config.sh@317 -- # [[ -n 2557959 ]] 00:05:38.467 22:03:03 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:38.467 22:03:03 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:38.467 22:03:03 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:38.467 22:03:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.467 22:03:03 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:38.467 22:03:03 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:38.467 22:03:03 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:38.467 22:03:03 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:38.467 22:03:03 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:38.467 22:03:03 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:38.467 22:03:03 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:38.467 22:03:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.467 22:03:03 json_config -- json_config/json_config.sh@323 -- # killprocess 2557959 00:05:38.467 22:03:03 json_config -- common/autotest_common.sh@948 -- # '[' -z 2557959 ']' 00:05:38.467 22:03:03 json_config -- common/autotest_common.sh@952 -- # kill -0 2557959 00:05:38.467 22:03:03 json_config -- common/autotest_common.sh@953 -- # uname 00:05:38.467 22:03:03 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:38.467 22:03:03 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2557959 00:05:38.467 22:03:03 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:38.467 22:03:03 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:38.467 22:03:03 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2557959' 00:05:38.467 killing process with pid 2557959 00:05:38.467 22:03:03 json_config -- common/autotest_common.sh@967 -- # kill 2557959 00:05:38.467 22:03:03 json_config -- common/autotest_common.sh@972 -- # wait 2557959 00:05:39.038 22:03:04 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:39.038 22:03:04 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:39.038 22:03:04 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:39.038 22:03:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.038 22:03:04 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:39.038 22:03:04 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:39.038 INFO: Success 00:05:39.038 00:05:39.038 real 0m7.132s 
00:05:39.038 user 0m8.440s 00:05:39.038 sys 0m1.894s 00:05:39.038 22:03:04 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.039 22:03:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.039 ************************************ 00:05:39.039 END TEST json_config 00:05:39.039 ************************************ 00:05:39.039 22:03:04 -- common/autotest_common.sh@1142 -- # return 0 00:05:39.039 22:03:04 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:39.039 22:03:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:39.039 22:03:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.039 22:03:04 -- common/autotest_common.sh@10 -- # set +x 00:05:39.039 ************************************ 00:05:39.039 START TEST json_config_extra_key 00:05:39.039 ************************************ 00:05:39.039 22:03:04 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:39.039 22:03:04 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:39.039 22:03:04 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:39.039 22:03:04 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:39.039 22:03:04 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:39.039 22:03:04 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:39.039 22:03:04 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:39.039 22:03:04 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:39.039 22:03:04 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:39.039 22:03:04 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:39.039 22:03:04 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:39.039 22:03:04 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:39.039 22:03:04 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:39.039 22:03:04 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:39.039 22:03:04 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:39.039 22:03:04 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:39.039 22:03:04 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:39.039 22:03:04 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:39.039 22:03:04 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:39.039 22:03:04 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:39.039 22:03:04 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:39.039 22:03:04 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:39.039 22:03:04 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:39.039 22:03:04 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.039 22:03:04 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.039 22:03:04 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.039 22:03:04 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:39.039 22:03:04 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.039 22:03:04 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:39.039 22:03:04 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:39.039 22:03:04 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:39.039 22:03:04 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:39.039 22:03:04 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:39.039 22:03:04 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:39.039 22:03:04 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:39.039 22:03:04 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:39.039 22:03:04 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:39.039 22:03:04 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:39.039 22:03:04 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:39.039 22:03:04 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:39.039 22:03:04 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:39.039 22:03:04 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:39.039 22:03:04 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:39.039 22:03:04 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:39.039 22:03:04 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:39.039 22:03:04 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:39.039 22:03:04 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:39.039 22:03:04 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:39.039 INFO: launching applications... 00:05:39.039 22:03:04 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:39.039 22:03:04 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:39.039 22:03:04 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:39.039 22:03:04 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:39.039 22:03:04 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:39.039 22:03:04 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:39.039 22:03:04 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:39.039 22:03:04 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:39.039 22:03:04 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2558479 00:05:39.039 22:03:04 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:39.039 Waiting for target to run... 00:05:39.039 22:03:04 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2558479 /var/tmp/spdk_tgt.sock 00:05:39.039 22:03:04 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 2558479 ']' 00:05:39.039 22:03:04 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:39.039 22:03:04 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:39.039 22:03:04 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:39.039 22:03:04 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:39.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:39.039 22:03:04 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:39.039 22:03:04 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:39.300 [2024-07-15 22:03:04.363063] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
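Both json_config tests stop their target the same way (json_config/common.sh@31-53 in the traces above and below): send SIGINT, then poll for up to 30 half-second intervals until the pid disappears. A compact sketch of that loop, with $app_pid standing in for the recorded target pid:

kill -SIGINT "$app_pid"                  # ask spdk_tgt to shut down cleanly
for (( i = 0; i < 30; i++ )); do
    if ! kill -0 "$app_pid" 2>/dev/null; then
        echo 'SPDK target shutdown done'
        break
    fi
    sleep 0.5
done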
00:05:39.300 [2024-07-15 22:03:04.363150] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2558479 ] 00:05:39.300 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.560 [2024-07-15 22:03:04.767018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.560 [2024-07-15 22:03:04.830917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.820 22:03:05 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:39.820 22:03:05 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:39.820 22:03:05 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:39.820 00:05:39.820 22:03:05 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:39.820 INFO: shutting down applications... 00:05:39.820 22:03:05 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:39.820 22:03:05 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:39.820 22:03:05 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:39.820 22:03:05 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2558479 ]] 00:05:39.820 22:03:05 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2558479 00:05:39.820 22:03:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:39.820 22:03:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:39.820 22:03:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2558479 00:05:39.820 22:03:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:40.391 22:03:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:40.391 22:03:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:40.391 22:03:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2558479 00:05:40.391 22:03:05 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:40.391 22:03:05 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:40.391 22:03:05 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:40.391 22:03:05 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:40.391 SPDK target shutdown done 00:05:40.391 22:03:05 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:40.391 Success 00:05:40.391 00:05:40.391 real 0m1.449s 00:05:40.391 user 0m0.975s 00:05:40.391 sys 0m0.508s 00:05:40.391 22:03:05 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.391 22:03:05 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:40.391 ************************************ 00:05:40.391 END TEST json_config_extra_key 00:05:40.391 ************************************ 00:05:40.391 22:03:05 -- common/autotest_common.sh@1142 -- # return 0 00:05:40.391 22:03:05 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:40.391 22:03:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:40.391 22:03:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.391 22:03:05 -- 
common/autotest_common.sh@10 -- # set +x 00:05:40.652 ************************************ 00:05:40.652 START TEST alias_rpc 00:05:40.652 ************************************ 00:05:40.652 22:03:05 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:40.652 * Looking for test storage... 00:05:40.652 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:40.652 22:03:05 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:40.652 22:03:05 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2558858 00:05:40.652 22:03:05 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2558858 00:05:40.652 22:03:05 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.652 22:03:05 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 2558858 ']' 00:05:40.652 22:03:05 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.652 22:03:05 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.652 22:03:05 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.652 22:03:05 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.652 22:03:05 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.652 [2024-07-15 22:03:05.878447] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:05:40.652 [2024-07-15 22:03:05.878510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2558858 ] 00:05:40.652 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.652 [2024-07-15 22:03:05.944015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.912 [2024-07-15 22:03:06.017797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.483 22:03:06 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.483 22:03:06 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:41.483 22:03:06 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:41.780 22:03:06 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2558858 00:05:41.780 22:03:06 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 2558858 ']' 00:05:41.780 22:03:06 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 2558858 00:05:41.780 22:03:06 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:41.780 22:03:06 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:41.780 22:03:06 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2558858 00:05:41.780 22:03:06 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:41.780 22:03:06 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:41.780 22:03:06 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2558858' 00:05:41.780 killing process with pid 2558858 00:05:41.780 22:03:06 alias_rpc -- 
common/autotest_common.sh@967 -- # kill 2558858 00:05:41.780 22:03:06 alias_rpc -- common/autotest_common.sh@972 -- # wait 2558858 00:05:42.045 00:05:42.045 real 0m1.390s 00:05:42.045 user 0m1.527s 00:05:42.045 sys 0m0.387s 00:05:42.046 22:03:07 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.046 22:03:07 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.046 ************************************ 00:05:42.046 END TEST alias_rpc 00:05:42.046 ************************************ 00:05:42.046 22:03:07 -- common/autotest_common.sh@1142 -- # return 0 00:05:42.046 22:03:07 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:42.046 22:03:07 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:42.046 22:03:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.046 22:03:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.046 22:03:07 -- common/autotest_common.sh@10 -- # set +x 00:05:42.046 ************************************ 00:05:42.046 START TEST spdkcli_tcp 00:05:42.046 ************************************ 00:05:42.046 22:03:07 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:42.046 * Looking for test storage... 00:05:42.046 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:42.046 22:03:07 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:42.046 22:03:07 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:42.046 22:03:07 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:42.046 22:03:07 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:42.046 22:03:07 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:42.046 22:03:07 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:42.046 22:03:07 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:42.046 22:03:07 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:42.046 22:03:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:42.046 22:03:07 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2559253 00:05:42.046 22:03:07 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2559253 00:05:42.046 22:03:07 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:42.046 22:03:07 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 2559253 ']' 00:05:42.046 22:03:07 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.046 22:03:07 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:42.046 22:03:07 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.046 22:03:07 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:42.046 22:03:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:42.046 [2024-07-15 22:03:07.349273] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
00:05:42.046 [2024-07-15 22:03:07.349336] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2559253 ] 00:05:42.306 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.306 [2024-07-15 22:03:07.415284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.306 [2024-07-15 22:03:07.489970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.306 [2024-07-15 22:03:07.489973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.876 22:03:08 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.876 22:03:08 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:42.876 22:03:08 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2559364 00:05:42.876 22:03:08 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:42.876 22:03:08 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:43.138 [ 00:05:43.138 "bdev_malloc_delete", 00:05:43.138 "bdev_malloc_create", 00:05:43.138 "bdev_null_resize", 00:05:43.138 "bdev_null_delete", 00:05:43.138 "bdev_null_create", 00:05:43.138 "bdev_nvme_cuse_unregister", 00:05:43.138 "bdev_nvme_cuse_register", 00:05:43.138 "bdev_opal_new_user", 00:05:43.138 "bdev_opal_set_lock_state", 00:05:43.138 "bdev_opal_delete", 00:05:43.138 "bdev_opal_get_info", 00:05:43.138 "bdev_opal_create", 00:05:43.138 "bdev_nvme_opal_revert", 00:05:43.138 "bdev_nvme_opal_init", 00:05:43.138 "bdev_nvme_send_cmd", 00:05:43.138 "bdev_nvme_get_path_iostat", 00:05:43.138 "bdev_nvme_get_mdns_discovery_info", 00:05:43.138 "bdev_nvme_stop_mdns_discovery", 00:05:43.138 "bdev_nvme_start_mdns_discovery", 00:05:43.138 "bdev_nvme_set_multipath_policy", 00:05:43.138 "bdev_nvme_set_preferred_path", 00:05:43.138 "bdev_nvme_get_io_paths", 00:05:43.138 "bdev_nvme_remove_error_injection", 00:05:43.138 "bdev_nvme_add_error_injection", 00:05:43.138 "bdev_nvme_get_discovery_info", 00:05:43.138 "bdev_nvme_stop_discovery", 00:05:43.138 "bdev_nvme_start_discovery", 00:05:43.138 "bdev_nvme_get_controller_health_info", 00:05:43.138 "bdev_nvme_disable_controller", 00:05:43.138 "bdev_nvme_enable_controller", 00:05:43.138 "bdev_nvme_reset_controller", 00:05:43.138 "bdev_nvme_get_transport_statistics", 00:05:43.138 "bdev_nvme_apply_firmware", 00:05:43.138 "bdev_nvme_detach_controller", 00:05:43.138 "bdev_nvme_get_controllers", 00:05:43.138 "bdev_nvme_attach_controller", 00:05:43.138 "bdev_nvme_set_hotplug", 00:05:43.138 "bdev_nvme_set_options", 00:05:43.138 "bdev_passthru_delete", 00:05:43.138 "bdev_passthru_create", 00:05:43.138 "bdev_lvol_set_parent_bdev", 00:05:43.138 "bdev_lvol_set_parent", 00:05:43.138 "bdev_lvol_check_shallow_copy", 00:05:43.138 "bdev_lvol_start_shallow_copy", 00:05:43.138 "bdev_lvol_grow_lvstore", 00:05:43.138 "bdev_lvol_get_lvols", 00:05:43.138 "bdev_lvol_get_lvstores", 00:05:43.138 "bdev_lvol_delete", 00:05:43.138 "bdev_lvol_set_read_only", 00:05:43.138 "bdev_lvol_resize", 00:05:43.138 "bdev_lvol_decouple_parent", 00:05:43.138 "bdev_lvol_inflate", 00:05:43.138 "bdev_lvol_rename", 00:05:43.138 "bdev_lvol_clone_bdev", 00:05:43.138 "bdev_lvol_clone", 00:05:43.138 "bdev_lvol_snapshot", 00:05:43.138 "bdev_lvol_create", 00:05:43.138 "bdev_lvol_delete_lvstore", 00:05:43.138 
"bdev_lvol_rename_lvstore", 00:05:43.138 "bdev_lvol_create_lvstore", 00:05:43.138 "bdev_raid_set_options", 00:05:43.138 "bdev_raid_remove_base_bdev", 00:05:43.138 "bdev_raid_add_base_bdev", 00:05:43.138 "bdev_raid_delete", 00:05:43.138 "bdev_raid_create", 00:05:43.138 "bdev_raid_get_bdevs", 00:05:43.138 "bdev_error_inject_error", 00:05:43.138 "bdev_error_delete", 00:05:43.138 "bdev_error_create", 00:05:43.138 "bdev_split_delete", 00:05:43.138 "bdev_split_create", 00:05:43.138 "bdev_delay_delete", 00:05:43.138 "bdev_delay_create", 00:05:43.138 "bdev_delay_update_latency", 00:05:43.138 "bdev_zone_block_delete", 00:05:43.138 "bdev_zone_block_create", 00:05:43.138 "blobfs_create", 00:05:43.138 "blobfs_detect", 00:05:43.138 "blobfs_set_cache_size", 00:05:43.138 "bdev_aio_delete", 00:05:43.138 "bdev_aio_rescan", 00:05:43.138 "bdev_aio_create", 00:05:43.138 "bdev_ftl_set_property", 00:05:43.138 "bdev_ftl_get_properties", 00:05:43.138 "bdev_ftl_get_stats", 00:05:43.138 "bdev_ftl_unmap", 00:05:43.138 "bdev_ftl_unload", 00:05:43.138 "bdev_ftl_delete", 00:05:43.138 "bdev_ftl_load", 00:05:43.138 "bdev_ftl_create", 00:05:43.138 "bdev_virtio_attach_controller", 00:05:43.138 "bdev_virtio_scsi_get_devices", 00:05:43.138 "bdev_virtio_detach_controller", 00:05:43.138 "bdev_virtio_blk_set_hotplug", 00:05:43.138 "bdev_iscsi_delete", 00:05:43.138 "bdev_iscsi_create", 00:05:43.138 "bdev_iscsi_set_options", 00:05:43.138 "accel_error_inject_error", 00:05:43.138 "ioat_scan_accel_module", 00:05:43.138 "dsa_scan_accel_module", 00:05:43.138 "iaa_scan_accel_module", 00:05:43.138 "vfu_virtio_create_scsi_endpoint", 00:05:43.138 "vfu_virtio_scsi_remove_target", 00:05:43.138 "vfu_virtio_scsi_add_target", 00:05:43.138 "vfu_virtio_create_blk_endpoint", 00:05:43.138 "vfu_virtio_delete_endpoint", 00:05:43.138 "keyring_file_remove_key", 00:05:43.138 "keyring_file_add_key", 00:05:43.138 "keyring_linux_set_options", 00:05:43.138 "iscsi_get_histogram", 00:05:43.138 "iscsi_enable_histogram", 00:05:43.138 "iscsi_set_options", 00:05:43.138 "iscsi_get_auth_groups", 00:05:43.138 "iscsi_auth_group_remove_secret", 00:05:43.138 "iscsi_auth_group_add_secret", 00:05:43.138 "iscsi_delete_auth_group", 00:05:43.138 "iscsi_create_auth_group", 00:05:43.138 "iscsi_set_discovery_auth", 00:05:43.138 "iscsi_get_options", 00:05:43.138 "iscsi_target_node_request_logout", 00:05:43.138 "iscsi_target_node_set_redirect", 00:05:43.138 "iscsi_target_node_set_auth", 00:05:43.138 "iscsi_target_node_add_lun", 00:05:43.138 "iscsi_get_stats", 00:05:43.138 "iscsi_get_connections", 00:05:43.138 "iscsi_portal_group_set_auth", 00:05:43.138 "iscsi_start_portal_group", 00:05:43.138 "iscsi_delete_portal_group", 00:05:43.138 "iscsi_create_portal_group", 00:05:43.138 "iscsi_get_portal_groups", 00:05:43.138 "iscsi_delete_target_node", 00:05:43.138 "iscsi_target_node_remove_pg_ig_maps", 00:05:43.138 "iscsi_target_node_add_pg_ig_maps", 00:05:43.138 "iscsi_create_target_node", 00:05:43.138 "iscsi_get_target_nodes", 00:05:43.138 "iscsi_delete_initiator_group", 00:05:43.138 "iscsi_initiator_group_remove_initiators", 00:05:43.138 "iscsi_initiator_group_add_initiators", 00:05:43.138 "iscsi_create_initiator_group", 00:05:43.138 "iscsi_get_initiator_groups", 00:05:43.138 "nvmf_set_crdt", 00:05:43.138 "nvmf_set_config", 00:05:43.138 "nvmf_set_max_subsystems", 00:05:43.138 "nvmf_stop_mdns_prr", 00:05:43.138 "nvmf_publish_mdns_prr", 00:05:43.138 "nvmf_subsystem_get_listeners", 00:05:43.138 "nvmf_subsystem_get_qpairs", 00:05:43.138 "nvmf_subsystem_get_controllers", 00:05:43.138 
"nvmf_get_stats", 00:05:43.138 "nvmf_get_transports", 00:05:43.138 "nvmf_create_transport", 00:05:43.138 "nvmf_get_targets", 00:05:43.138 "nvmf_delete_target", 00:05:43.138 "nvmf_create_target", 00:05:43.138 "nvmf_subsystem_allow_any_host", 00:05:43.138 "nvmf_subsystem_remove_host", 00:05:43.138 "nvmf_subsystem_add_host", 00:05:43.138 "nvmf_ns_remove_host", 00:05:43.138 "nvmf_ns_add_host", 00:05:43.138 "nvmf_subsystem_remove_ns", 00:05:43.138 "nvmf_subsystem_add_ns", 00:05:43.138 "nvmf_subsystem_listener_set_ana_state", 00:05:43.138 "nvmf_discovery_get_referrals", 00:05:43.138 "nvmf_discovery_remove_referral", 00:05:43.138 "nvmf_discovery_add_referral", 00:05:43.138 "nvmf_subsystem_remove_listener", 00:05:43.138 "nvmf_subsystem_add_listener", 00:05:43.138 "nvmf_delete_subsystem", 00:05:43.138 "nvmf_create_subsystem", 00:05:43.138 "nvmf_get_subsystems", 00:05:43.138 "env_dpdk_get_mem_stats", 00:05:43.138 "nbd_get_disks", 00:05:43.138 "nbd_stop_disk", 00:05:43.138 "nbd_start_disk", 00:05:43.138 "ublk_recover_disk", 00:05:43.138 "ublk_get_disks", 00:05:43.138 "ublk_stop_disk", 00:05:43.138 "ublk_start_disk", 00:05:43.138 "ublk_destroy_target", 00:05:43.138 "ublk_create_target", 00:05:43.138 "virtio_blk_create_transport", 00:05:43.138 "virtio_blk_get_transports", 00:05:43.138 "vhost_controller_set_coalescing", 00:05:43.138 "vhost_get_controllers", 00:05:43.138 "vhost_delete_controller", 00:05:43.138 "vhost_create_blk_controller", 00:05:43.138 "vhost_scsi_controller_remove_target", 00:05:43.138 "vhost_scsi_controller_add_target", 00:05:43.138 "vhost_start_scsi_controller", 00:05:43.138 "vhost_create_scsi_controller", 00:05:43.138 "thread_set_cpumask", 00:05:43.138 "framework_get_governor", 00:05:43.138 "framework_get_scheduler", 00:05:43.138 "framework_set_scheduler", 00:05:43.138 "framework_get_reactors", 00:05:43.138 "thread_get_io_channels", 00:05:43.138 "thread_get_pollers", 00:05:43.138 "thread_get_stats", 00:05:43.138 "framework_monitor_context_switch", 00:05:43.138 "spdk_kill_instance", 00:05:43.138 "log_enable_timestamps", 00:05:43.138 "log_get_flags", 00:05:43.138 "log_clear_flag", 00:05:43.138 "log_set_flag", 00:05:43.138 "log_get_level", 00:05:43.138 "log_set_level", 00:05:43.138 "log_get_print_level", 00:05:43.138 "log_set_print_level", 00:05:43.138 "framework_enable_cpumask_locks", 00:05:43.139 "framework_disable_cpumask_locks", 00:05:43.139 "framework_wait_init", 00:05:43.139 "framework_start_init", 00:05:43.139 "scsi_get_devices", 00:05:43.139 "bdev_get_histogram", 00:05:43.139 "bdev_enable_histogram", 00:05:43.139 "bdev_set_qos_limit", 00:05:43.139 "bdev_set_qd_sampling_period", 00:05:43.139 "bdev_get_bdevs", 00:05:43.139 "bdev_reset_iostat", 00:05:43.139 "bdev_get_iostat", 00:05:43.139 "bdev_examine", 00:05:43.139 "bdev_wait_for_examine", 00:05:43.139 "bdev_set_options", 00:05:43.139 "notify_get_notifications", 00:05:43.139 "notify_get_types", 00:05:43.139 "accel_get_stats", 00:05:43.139 "accel_set_options", 00:05:43.139 "accel_set_driver", 00:05:43.139 "accel_crypto_key_destroy", 00:05:43.139 "accel_crypto_keys_get", 00:05:43.139 "accel_crypto_key_create", 00:05:43.139 "accel_assign_opc", 00:05:43.139 "accel_get_module_info", 00:05:43.139 "accel_get_opc_assignments", 00:05:43.139 "vmd_rescan", 00:05:43.139 "vmd_remove_device", 00:05:43.139 "vmd_enable", 00:05:43.139 "sock_get_default_impl", 00:05:43.139 "sock_set_default_impl", 00:05:43.139 "sock_impl_set_options", 00:05:43.139 "sock_impl_get_options", 00:05:43.139 "iobuf_get_stats", 00:05:43.139 "iobuf_set_options", 
00:05:43.139 "keyring_get_keys", 00:05:43.139 "framework_get_pci_devices", 00:05:43.139 "framework_get_config", 00:05:43.139 "framework_get_subsystems", 00:05:43.139 "vfu_tgt_set_base_path", 00:05:43.139 "trace_get_info", 00:05:43.139 "trace_get_tpoint_group_mask", 00:05:43.139 "trace_disable_tpoint_group", 00:05:43.139 "trace_enable_tpoint_group", 00:05:43.139 "trace_clear_tpoint_mask", 00:05:43.139 "trace_set_tpoint_mask", 00:05:43.139 "spdk_get_version", 00:05:43.139 "rpc_get_methods" 00:05:43.139 ] 00:05:43.139 22:03:08 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:43.139 22:03:08 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:43.139 22:03:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:43.139 22:03:08 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:43.139 22:03:08 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2559253 00:05:43.139 22:03:08 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 2559253 ']' 00:05:43.139 22:03:08 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 2559253 00:05:43.139 22:03:08 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:43.139 22:03:08 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:43.139 22:03:08 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2559253 00:05:43.139 22:03:08 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:43.139 22:03:08 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:43.139 22:03:08 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2559253' 00:05:43.139 killing process with pid 2559253 00:05:43.139 22:03:08 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 2559253 00:05:43.139 22:03:08 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 2559253 00:05:43.401 00:05:43.401 real 0m1.398s 00:05:43.401 user 0m2.570s 00:05:43.401 sys 0m0.423s 00:05:43.401 22:03:08 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.401 22:03:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:43.401 ************************************ 00:05:43.401 END TEST spdkcli_tcp 00:05:43.401 ************************************ 00:05:43.401 22:03:08 -- common/autotest_common.sh@1142 -- # return 0 00:05:43.401 22:03:08 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:43.401 22:03:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.401 22:03:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.401 22:03:08 -- common/autotest_common.sh@10 -- # set +x 00:05:43.401 ************************************ 00:05:43.401 START TEST dpdk_mem_utility 00:05:43.401 ************************************ 00:05:43.401 22:03:08 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:43.662 * Looking for test storage... 
00:05:43.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:43.662 22:03:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:43.662 22:03:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2559653 00:05:43.662 22:03:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2559653 00:05:43.662 22:03:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.662 22:03:08 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 2559653 ']' 00:05:43.662 22:03:08 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.662 22:03:08 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.662 22:03:08 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.662 22:03:08 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.662 22:03:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:43.662 [2024-07-15 22:03:08.818782] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:05:43.662 [2024-07-15 22:03:08.818853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2559653 ] 00:05:43.662 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.662 [2024-07-15 22:03:08.884737] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.662 [2024-07-15 22:03:08.958066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.605 22:03:09 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.605 22:03:09 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:44.605 22:03:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:44.605 22:03:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:44.605 22:03:09 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.605 22:03:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:44.605 { 00:05:44.605 "filename": "/tmp/spdk_mem_dump.txt" 00:05:44.605 } 00:05:44.605 22:03:09 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.605 22:03:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:44.605 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:44.605 1 heaps totaling size 814.000000 MiB 00:05:44.605 size: 814.000000 MiB heap id: 0 00:05:44.605 end heaps---------- 00:05:44.605 8 mempools totaling size 598.116089 MiB 00:05:44.605 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:44.605 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:44.605 size: 84.521057 MiB name: bdev_io_2559653 00:05:44.605 size: 51.011292 MiB name: evtpool_2559653 00:05:44.605 
size: 50.003479 MiB name: msgpool_2559653 00:05:44.605 size: 21.763794 MiB name: PDU_Pool 00:05:44.605 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:44.605 size: 0.026123 MiB name: Session_Pool 00:05:44.605 end mempools------- 00:05:44.605 6 memzones totaling size 4.142822 MiB 00:05:44.605 size: 1.000366 MiB name: RG_ring_0_2559653 00:05:44.605 size: 1.000366 MiB name: RG_ring_1_2559653 00:05:44.605 size: 1.000366 MiB name: RG_ring_4_2559653 00:05:44.605 size: 1.000366 MiB name: RG_ring_5_2559653 00:05:44.605 size: 0.125366 MiB name: RG_ring_2_2559653 00:05:44.605 size: 0.015991 MiB name: RG_ring_3_2559653 00:05:44.605 end memzones------- 00:05:44.605 22:03:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:44.605 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:44.605 list of free elements. size: 12.519348 MiB 00:05:44.605 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:44.605 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:44.605 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:44.605 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:44.605 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:44.605 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:44.605 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:44.605 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:44.605 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:44.605 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:44.605 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:44.605 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:44.605 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:44.605 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:44.605 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:44.605 list of standard malloc elements. 
size: 199.218079 MiB 00:05:44.605 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:44.605 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:44.605 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:44.605 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:44.605 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:44.605 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:44.605 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:44.605 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:44.605 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:44.605 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:44.605 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:44.605 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:44.605 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:44.605 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:44.605 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:44.605 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:44.605 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:44.605 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:44.605 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:44.605 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:44.605 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:44.605 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:44.605 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:44.605 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:44.605 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:44.605 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:44.605 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:44.605 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:44.605 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:44.605 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:44.605 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:44.605 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:44.605 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:44.605 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:44.605 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:44.605 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:44.605 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:44.605 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:44.605 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:44.605 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:44.605 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:44.605 list of memzone associated elements. 
size: 602.262573 MiB 00:05:44.605 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:44.606 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:44.606 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:44.606 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:44.606 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:44.606 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2559653_0 00:05:44.606 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:44.606 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2559653_0 00:05:44.606 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:44.606 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2559653_0 00:05:44.606 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:44.606 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:44.606 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:44.606 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:44.606 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:44.606 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2559653 00:05:44.606 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:44.606 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2559653 00:05:44.606 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:44.606 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2559653 00:05:44.606 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:44.606 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:44.606 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:44.606 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:44.606 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:44.606 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:44.606 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:44.606 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:44.606 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:44.606 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2559653 00:05:44.606 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:44.606 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2559653 00:05:44.606 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:44.606 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2559653 00:05:44.606 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:44.606 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2559653 00:05:44.606 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:44.606 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2559653 00:05:44.606 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:44.606 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:44.606 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:44.606 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:44.606 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:44.606 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:44.606 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:44.606 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2559653 00:05:44.606 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:44.606 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:44.606 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:44.606 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:44.606 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:44.606 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2559653 00:05:44.606 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:44.606 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:44.606 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:44.606 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2559653 00:05:44.606 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:44.606 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2559653 00:05:44.606 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:44.606 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:44.606 22:03:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:44.606 22:03:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2559653 00:05:44.606 22:03:09 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 2559653 ']' 00:05:44.606 22:03:09 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 2559653 00:05:44.606 22:03:09 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:44.606 22:03:09 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:44.606 22:03:09 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2559653 00:05:44.606 22:03:09 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:44.606 22:03:09 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:44.606 22:03:09 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2559653' 00:05:44.606 killing process with pid 2559653 00:05:44.606 22:03:09 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 2559653 00:05:44.606 22:03:09 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 2559653 00:05:44.867 00:05:44.867 real 0m1.306s 00:05:44.867 user 0m1.376s 00:05:44.867 sys 0m0.398s 00:05:44.867 22:03:09 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.867 22:03:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:44.867 ************************************ 00:05:44.867 END TEST dpdk_mem_utility 00:05:44.867 ************************************ 00:05:44.867 22:03:10 -- common/autotest_common.sh@1142 -- # return 0 00:05:44.867 22:03:10 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:44.867 22:03:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.867 22:03:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.867 22:03:10 -- common/autotest_common.sh@10 -- # set +x 00:05:44.867 ************************************ 00:05:44.868 START TEST event 00:05:44.868 ************************************ 00:05:44.868 22:03:10 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:44.868 * Looking for test storage... 
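The dpdk_mem_utility run above asks the running target for a DPDK memory dump over RPC and then post-processes it with scripts/dpdk_mem_info.py; the flow is approximately the following sketch (paths shortened, target already running):
  # sketch of the sequence above; the dump lands in /tmp/spdk_mem_dump.txt
  ./scripts/rpc.py env_dpdk_get_mem_stats      # target writes the dump file
  ./scripts/dpdk_mem_info.py                   # heap / mempool / memzone summary
  ./scripts/dpdk_mem_info.py -m 0              # element-level detail (heap id 0 in the trace above)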
00:05:44.868 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:44.868 22:03:10 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:44.868 22:03:10 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:44.868 22:03:10 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:44.868 22:03:10 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:44.868 22:03:10 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.868 22:03:10 event -- common/autotest_common.sh@10 -- # set +x 00:05:44.868 ************************************ 00:05:44.868 START TEST event_perf 00:05:44.868 ************************************ 00:05:44.868 22:03:10 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:44.868 Running I/O for 1 seconds...[2024-07-15 22:03:10.185978] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:05:44.868 [2024-07-15 22:03:10.186062] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2560297 ] 00:05:45.128 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.128 [2024-07-15 22:03:10.254045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:45.128 [2024-07-15 22:03:10.327881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.128 [2024-07-15 22:03:10.328000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:45.128 [2024-07-15 22:03:10.328171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:45.128 [2024-07-15 22:03:10.328177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.071 Running I/O for 1 seconds... 00:05:46.071 lcore 0: 173984 00:05:46.071 lcore 1: 173986 00:05:46.071 lcore 2: 173984 00:05:46.071 lcore 3: 173987 00:05:46.071 done. 00:05:46.071 00:05:46.071 real 0m1.217s 00:05:46.071 user 0m4.132s 00:05:46.071 sys 0m0.081s 00:05:46.071 22:03:11 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.071 22:03:11 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:46.071 ************************************ 00:05:46.071 END TEST event_perf 00:05:46.071 ************************************ 00:05:46.332 22:03:11 event -- common/autotest_common.sh@1142 -- # return 0 00:05:46.332 22:03:11 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:46.332 22:03:11 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:46.332 22:03:11 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.332 22:03:11 event -- common/autotest_common.sh@10 -- # set +x 00:05:46.332 ************************************ 00:05:46.332 START TEST event_reactor 00:05:46.332 ************************************ 00:05:46.332 22:03:11 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:46.332 [2024-07-15 22:03:11.479031] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
00:05:46.332 [2024-07-15 22:03:11.479152] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2560628 ] 00:05:46.332 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.332 [2024-07-15 22:03:11.542953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.332 [2024-07-15 22:03:11.610837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.713 test_start 00:05:47.713 oneshot 00:05:47.713 tick 100 00:05:47.713 tick 100 00:05:47.713 tick 250 00:05:47.713 tick 100 00:05:47.713 tick 100 00:05:47.713 tick 250 00:05:47.713 tick 100 00:05:47.713 tick 500 00:05:47.713 tick 100 00:05:47.713 tick 100 00:05:47.713 tick 250 00:05:47.713 tick 100 00:05:47.713 tick 100 00:05:47.713 test_end 00:05:47.713 00:05:47.713 real 0m1.206s 00:05:47.713 user 0m1.130s 00:05:47.713 sys 0m0.072s 00:05:47.713 22:03:12 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.713 22:03:12 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:47.713 ************************************ 00:05:47.713 END TEST event_reactor 00:05:47.713 ************************************ 00:05:47.713 22:03:12 event -- common/autotest_common.sh@1142 -- # return 0 00:05:47.713 22:03:12 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:47.713 22:03:12 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:47.713 22:03:12 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.713 22:03:12 event -- common/autotest_common.sh@10 -- # set +x 00:05:47.713 ************************************ 00:05:47.713 START TEST event_reactor_perf 00:05:47.713 ************************************ 00:05:47.713 22:03:12 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:47.713 [2024-07-15 22:03:12.764705] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
00:05:47.713 [2024-07-15 22:03:12.764817] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2560899 ] 00:05:47.713 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.713 [2024-07-15 22:03:12.826355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.713 [2024-07-15 22:03:12.891189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.652 test_start 00:05:48.652 test_end 00:05:48.652 Performance: 371369 events per second 00:05:48.652 00:05:48.652 real 0m1.200s 00:05:48.652 user 0m1.132s 00:05:48.652 sys 0m0.064s 00:05:48.652 22:03:13 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.652 22:03:13 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:48.652 ************************************ 00:05:48.652 END TEST event_reactor_perf 00:05:48.652 ************************************ 00:05:48.652 22:03:13 event -- common/autotest_common.sh@1142 -- # return 0 00:05:48.912 22:03:13 event -- event/event.sh@49 -- # uname -s 00:05:48.912 22:03:13 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:48.912 22:03:13 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:48.912 22:03:13 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:48.912 22:03:13 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.912 22:03:13 event -- common/autotest_common.sh@10 -- # set +x 00:05:48.912 ************************************ 00:05:48.912 START TEST event_scheduler 00:05:48.912 ************************************ 00:05:48.912 22:03:14 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:48.912 * Looking for test storage... 00:05:48.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:48.912 22:03:14 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:48.912 22:03:14 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2561273 00:05:48.912 22:03:14 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.912 22:03:14 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2561273 00:05:48.912 22:03:14 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 2561273 ']' 00:05:48.912 22:03:14 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.912 22:03:14 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:48.912 22:03:14 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:48.912 22:03:14 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:48.912 22:03:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:48.912 22:03:14 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:48.912 [2024-07-15 22:03:14.173725] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:05:48.912 [2024-07-15 22:03:14.173784] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2561273 ] 00:05:48.912 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.912 [2024-07-15 22:03:14.224959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:49.172 [2024-07-15 22:03:14.283744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.172 [2024-07-15 22:03:14.283907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.172 [2024-07-15 22:03:14.284063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:49.172 [2024-07-15 22:03:14.284081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:49.741 22:03:14 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.741 22:03:14 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:49.741 22:03:14 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:49.741 22:03:14 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.741 22:03:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:49.741 [2024-07-15 22:03:14.946273] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:49.741 [2024-07-15 22:03:14.946287] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:49.741 [2024-07-15 22:03:14.946294] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:49.741 [2024-07-15 22:03:14.946298] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:49.741 [2024-07-15 22:03:14.946302] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:49.741 22:03:14 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.741 22:03:14 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:49.741 22:03:14 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.741 22:03:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:49.741 [2024-07-15 22:03:15.004941] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
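The scheduler harness above starts the test app paused with --wait-for-rpc, switches to the dynamic scheduler over RPC, and only then lets framework init proceed; a sketch of that setup, with paths shortened and the same core mask and main lcore as the trace, is roughly:
  # start the app paused, pick a scheduler, then resume initialization
  ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  # (the harness waits for /var/tmp/spdk.sock to appear before issuing RPCs)
  ./scripts/rpc.py framework_set_scheduler dynamic
  ./scripts/rpc.py framework_start_init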
00:05:49.741 22:03:15 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.741 22:03:15 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:49.741 22:03:15 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.741 22:03:15 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.741 22:03:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:49.741 ************************************ 00:05:49.741 START TEST scheduler_create_thread 00:05:49.741 ************************************ 00:05:49.741 22:03:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:49.741 22:03:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:49.741 22:03:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.741 22:03:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.741 2 00:05:49.741 22:03:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.741 22:03:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:49.741 22:03:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.741 22:03:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.000 3 00:05:50.000 22:03:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.000 22:03:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:50.000 22:03:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.000 22:03:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.000 4 00:05:50.000 22:03:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.000 22:03:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:50.000 22:03:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.000 22:03:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.000 5 00:05:50.000 22:03:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.000 22:03:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:50.000 22:03:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.000 22:03:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.000 6 00:05:50.000 22:03:15 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.000 22:03:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:50.000 22:03:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.000 22:03:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.000 7 00:05:50.000 22:03:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.000 22:03:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:50.000 22:03:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.000 22:03:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.000 8 00:05:50.000 22:03:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.000 22:03:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:50.000 22:03:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.000 22:03:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.000 9 00:05:50.000 22:03:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.000 22:03:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:50.000 22:03:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.000 22:03:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.570 10 00:05:50.570 22:03:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.570 22:03:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:50.570 22:03:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.570 22:03:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.952 22:03:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.952 22:03:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:51.952 22:03:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:51.952 22:03:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.952 22:03:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.522 22:03:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.522 22:03:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:52.522 22:03:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.522 22:03:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.467 22:03:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.467 22:03:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:53.467 22:03:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:53.467 22:03:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.467 22:03:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.036 22:03:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.036 00:05:54.036 real 0m4.223s 00:05:54.036 user 0m0.025s 00:05:54.036 sys 0m0.005s 00:05:54.036 22:03:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.036 22:03:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.036 ************************************ 00:05:54.036 END TEST scheduler_create_thread 00:05:54.036 ************************************ 00:05:54.036 22:03:19 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:54.036 22:03:19 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:54.036 22:03:19 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2561273 00:05:54.036 22:03:19 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 2561273 ']' 00:05:54.036 22:03:19 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 2561273 00:05:54.036 22:03:19 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:54.036 22:03:19 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:54.036 22:03:19 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2561273 00:05:54.296 22:03:19 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:54.296 22:03:19 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:54.297 22:03:19 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2561273' 00:05:54.297 killing process with pid 2561273 00:05:54.297 22:03:19 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 2561273 00:05:54.297 22:03:19 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 2561273 00:05:54.297 [2024-07-15 22:03:19.546151] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
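The scheduler_create_thread subtest above drives the test app's RPC plugin to create pinned active and idle threads, change one thread's activity, and delete another; stripped of the harness wrappers, the calls reduce to roughly this sketch (rpc_cmd wraps rpc.py and makes the scheduler_plugin module importable, which is assumed here):
  # create an active thread pinned to core 0, then adjust and delete existing threads
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50   # thread id 11, active percentage 50
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12          # remove thread id 12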
00:05:54.557 00:05:54.557 real 0m5.696s 00:05:54.557 user 0m12.743s 00:05:54.557 sys 0m0.332s 00:05:54.557 22:03:19 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.557 22:03:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:54.557 ************************************ 00:05:54.557 END TEST event_scheduler 00:05:54.557 ************************************ 00:05:54.557 22:03:19 event -- common/autotest_common.sh@1142 -- # return 0 00:05:54.557 22:03:19 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:54.557 22:03:19 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:54.557 22:03:19 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:54.557 22:03:19 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.557 22:03:19 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.557 ************************************ 00:05:54.557 START TEST app_repeat 00:05:54.557 ************************************ 00:05:54.557 22:03:19 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:54.557 22:03:19 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.557 22:03:19 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.557 22:03:19 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:54.557 22:03:19 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.557 22:03:19 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:54.557 22:03:19 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:54.557 22:03:19 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:54.557 22:03:19 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2562378 00:05:54.557 22:03:19 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.557 22:03:19 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:54.557 22:03:19 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2562378' 00:05:54.557 Process app_repeat pid: 2562378 00:05:54.557 22:03:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:54.557 22:03:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:54.557 spdk_app_start Round 0 00:05:54.557 22:03:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2562378 /var/tmp/spdk-nbd.sock 00:05:54.557 22:03:19 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2562378 ']' 00:05:54.557 22:03:19 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:54.557 22:03:19 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.557 22:03:19 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:54.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:54.557 22:03:19 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.557 22:03:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:54.557 [2024-07-15 22:03:19.834635] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
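The app_repeat run that follows is driven by the launch pattern visible in the trace: start the example app against a private RPC socket, trap cleanup, then block until it is listening. A minimal sketch under the paths and mask used in this run (waitforlisten and killprocess are autotest_common helpers):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc_server=/var/tmp/spdk-nbd.sock
  # -r: RPC socket, -m: core mask (two cores), -t: seconds per repeat round
  "$SPDK/test/event/app_repeat/app_repeat" -r "$rpc_server" -m 0x3 -t 4 &
  repeat_pid=$!
  trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
  waitforlisten "$repeat_pid" "$rpc_server"   # returns once the socket accepts RPCs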
00:05:54.557 [2024-07-15 22:03:19.834700] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2562378 ] 00:05:54.557 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.819 [2024-07-15 22:03:19.895008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.819 [2024-07-15 22:03:19.962894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.819 [2024-07-15 22:03:19.962897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.388 22:03:20 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.388 22:03:20 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:55.388 22:03:20 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.649 Malloc0 00:05:55.649 22:03:20 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.649 Malloc1 00:05:55.908 22:03:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.908 22:03:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.909 22:03:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.909 22:03:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:55.909 22:03:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.909 22:03:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:55.909 22:03:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.909 22:03:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.909 22:03:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.909 22:03:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:55.909 22:03:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.909 22:03:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:55.909 22:03:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:55.909 22:03:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:55.909 22:03:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.909 22:03:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:55.909 /dev/nbd0 00:05:55.909 22:03:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:55.909 22:03:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:55.909 22:03:21 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:55.909 22:03:21 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:55.909 22:03:21 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:55.909 22:03:21 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:55.909 22:03:21 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:55.909 22:03:21 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:55.909 22:03:21 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:55.909 22:03:21 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:55.909 22:03:21 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.909 1+0 records in 00:05:55.909 1+0 records out 00:05:55.909 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283218 s, 14.5 MB/s 00:05:55.909 22:03:21 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.909 22:03:21 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:55.909 22:03:21 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.909 22:03:21 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:55.909 22:03:21 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:55.909 22:03:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.909 22:03:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.909 22:03:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:56.169 /dev/nbd1 00:05:56.169 22:03:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:56.169 22:03:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:56.169 22:03:21 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:56.169 22:03:21 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:56.169 22:03:21 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:56.169 22:03:21 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:56.169 22:03:21 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:56.169 22:03:21 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:56.169 22:03:21 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:56.169 22:03:21 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:56.169 22:03:21 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:56.169 1+0 records in 00:05:56.169 1+0 records out 00:05:56.169 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237565 s, 17.2 MB/s 00:05:56.169 22:03:21 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:56.169 22:03:21 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:56.169 22:03:21 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:56.169 22:03:21 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:56.169 22:03:21 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:56.169 22:03:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.169 22:03:21 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.169 22:03:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.169 22:03:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.169 22:03:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.430 22:03:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:56.430 { 00:05:56.430 "nbd_device": "/dev/nbd0", 00:05:56.430 "bdev_name": "Malloc0" 00:05:56.430 }, 00:05:56.430 { 00:05:56.430 "nbd_device": "/dev/nbd1", 00:05:56.430 "bdev_name": "Malloc1" 00:05:56.430 } 00:05:56.430 ]' 00:05:56.430 22:03:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:56.430 { 00:05:56.430 "nbd_device": "/dev/nbd0", 00:05:56.430 "bdev_name": "Malloc0" 00:05:56.430 }, 00:05:56.430 { 00:05:56.430 "nbd_device": "/dev/nbd1", 00:05:56.430 "bdev_name": "Malloc1" 00:05:56.430 } 00:05:56.430 ]' 00:05:56.430 22:03:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.430 22:03:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:56.430 /dev/nbd1' 00:05:56.430 22:03:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:56.430 /dev/nbd1' 00:05:56.430 22:03:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.430 22:03:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:56.430 22:03:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:56.430 22:03:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:56.430 22:03:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:56.430 22:03:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:56.430 22:03:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.430 22:03:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.430 22:03:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:56.430 22:03:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:56.430 22:03:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:56.430 22:03:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:56.430 256+0 records in 00:05:56.430 256+0 records out 00:05:56.430 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115502 s, 90.8 MB/s 00:05:56.430 22:03:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.430 22:03:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:56.430 256+0 records in 00:05:56.430 256+0 records out 00:05:56.430 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0159979 s, 65.5 MB/s 00:05:56.430 22:03:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.430 22:03:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:56.430 256+0 records in 00:05:56.430 256+0 records out 00:05:56.430 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0174548 s, 60.1 MB/s 00:05:56.430 22:03:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:56.430 22:03:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.430 22:03:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.430 22:03:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:56.430 22:03:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:56.430 22:03:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:56.430 22:03:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:56.430 22:03:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.430 22:03:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:56.430 22:03:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.430 22:03:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:56.430 22:03:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:56.430 22:03:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:56.430 22:03:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.430 22:03:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.430 22:03:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:56.430 22:03:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:56.430 22:03:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.430 22:03:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:56.692 22:03:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:56.692 22:03:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:56.692 22:03:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:56.692 22:03:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.692 22:03:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.692 22:03:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:56.692 22:03:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:56.692 22:03:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.692 22:03:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.692 22:03:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:56.692 22:03:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:56.692 22:03:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:56.692 22:03:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:56.692 22:03:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.692 22:03:21 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.692 22:03:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:56.692 22:03:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:56.692 22:03:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.692 22:03:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.692 22:03:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.692 22:03:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.952 22:03:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:56.952 22:03:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:56.952 22:03:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.952 22:03:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:56.952 22:03:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.952 22:03:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:56.952 22:03:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:56.952 22:03:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:56.952 22:03:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:56.952 22:03:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:56.952 22:03:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:56.952 22:03:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:56.952 22:03:22 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:57.212 22:03:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:57.212 [2024-07-15 22:03:22.492891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.505 [2024-07-15 22:03:22.557647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.505 [2024-07-15 22:03:22.557650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.505 [2024-07-15 22:03:22.588954] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:57.505 [2024-07-15 22:03:22.588986] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:00.047 22:03:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:00.047 22:03:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:00.047 spdk_app_start Round 1 00:06:00.047 22:03:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2562378 /var/tmp/spdk-nbd.sock 00:06:00.047 22:03:25 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2562378 ']' 00:06:00.047 22:03:25 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:00.047 22:03:25 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.047 22:03:25 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:00.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
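Round 0 above walks the whole malloc-over-NBD data path. Reconstructed as a sketch from the trace (sizes, paths and block counts are the ones used in this run, and the nbd kernel module is assumed to be loaded already by the earlier modprobe step):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  # two 64 MB malloc bdevs with a 4096-byte block size, exported over NBD
  $rpc -s $sock bdev_malloc_create 64 4096          # -> Malloc0
  $rpc -s $sock bdev_malloc_create 64 4096          # -> Malloc1
  $rpc -s $sock nbd_start_disk Malloc0 /dev/nbd0
  $rpc -s $sock nbd_start_disk Malloc1 /dev/nbd1
  grep -q -w nbd0 /proc/partitions                  # waitfornbd: retried until the device appears
  # push 1 MiB of random data through each device and read it back
  tmp=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
  dd if=/dev/urandom of=$tmp bs=4096 count=256
  for dev in /dev/nbd0 /dev/nbd1; do
      dd if=$tmp of=$dev bs=4096 count=256 oflag=direct
      cmp -b -n 1M $tmp $dev                        # any mismatch fails the round
  done
  rm $tmp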
00:06:00.047 22:03:25 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.047 22:03:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:00.308 22:03:25 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.308 22:03:25 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:00.308 22:03:25 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.569 Malloc0 00:06:00.569 22:03:25 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.569 Malloc1 00:06:00.569 22:03:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.569 22:03:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.569 22:03:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.569 22:03:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:00.569 22:03:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.569 22:03:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:00.569 22:03:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.569 22:03:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.569 22:03:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.569 22:03:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:00.569 22:03:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.569 22:03:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:00.569 22:03:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:00.569 22:03:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:00.569 22:03:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.569 22:03:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:00.829 /dev/nbd0 00:06:00.829 22:03:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:00.829 22:03:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:00.829 22:03:25 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:00.829 22:03:25 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:00.829 22:03:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:00.829 22:03:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:00.829 22:03:25 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:00.829 22:03:25 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:00.829 22:03:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:00.829 22:03:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:00.829 22:03:25 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:00.829 1+0 records in 00:06:00.829 1+0 records out 00:06:00.829 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026018 s, 15.7 MB/s 00:06:00.829 22:03:25 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.829 22:03:25 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:00.829 22:03:25 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.829 22:03:25 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:00.829 22:03:25 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:00.829 22:03:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.829 22:03:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.829 22:03:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:01.090 /dev/nbd1 00:06:01.090 22:03:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:01.090 22:03:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:01.090 22:03:26 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:01.090 22:03:26 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:01.090 22:03:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:01.090 22:03:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:01.090 22:03:26 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:01.090 22:03:26 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:01.090 22:03:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:01.090 22:03:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:01.090 22:03:26 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.090 1+0 records in 00:06:01.090 1+0 records out 00:06:01.090 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268084 s, 15.3 MB/s 00:06:01.090 22:03:26 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:01.090 22:03:26 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:01.090 22:03:26 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:01.090 22:03:26 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:01.090 22:03:26 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:01.090 22:03:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.090 22:03:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.090 22:03:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.090 22:03:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.090 22:03:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.090 22:03:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:01.090 { 00:06:01.090 "nbd_device": "/dev/nbd0", 00:06:01.090 "bdev_name": "Malloc0" 00:06:01.090 }, 00:06:01.090 { 00:06:01.090 "nbd_device": "/dev/nbd1", 00:06:01.090 "bdev_name": "Malloc1" 00:06:01.090 } 00:06:01.090 ]' 00:06:01.090 22:03:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.090 22:03:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:01.090 { 00:06:01.090 "nbd_device": "/dev/nbd0", 00:06:01.090 "bdev_name": "Malloc0" 00:06:01.090 }, 00:06:01.090 { 00:06:01.090 "nbd_device": "/dev/nbd1", 00:06:01.090 "bdev_name": "Malloc1" 00:06:01.090 } 00:06:01.090 ]' 00:06:01.090 22:03:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:01.090 /dev/nbd1' 00:06:01.090 22:03:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:01.090 /dev/nbd1' 00:06:01.090 22:03:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.090 22:03:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:01.090 22:03:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:01.090 22:03:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:01.090 22:03:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:01.090 22:03:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:01.090 22:03:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.090 22:03:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.090 22:03:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:01.090 22:03:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.090 22:03:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:01.090 22:03:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:01.090 256+0 records in 00:06:01.090 256+0 records out 00:06:01.090 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012014 s, 87.3 MB/s 00:06:01.090 22:03:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.090 22:03:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:01.350 256+0 records in 00:06:01.350 256+0 records out 00:06:01.350 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0187952 s, 55.8 MB/s 00:06:01.350 22:03:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.350 22:03:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:01.350 256+0 records in 00:06:01.350 256+0 records out 00:06:01.350 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0176761 s, 59.3 MB/s 00:06:01.350 22:03:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:01.350 22:03:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.350 22:03:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.350 22:03:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:01.350 22:03:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.350 22:03:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:01.350 22:03:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:01.350 22:03:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.350 22:03:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:01.350 22:03:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.350 22:03:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:01.350 22:03:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.350 22:03:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:01.350 22:03:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.350 22:03:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.350 22:03:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:01.350 22:03:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:01.350 22:03:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.350 22:03:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:01.350 22:03:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:01.350 22:03:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:01.350 22:03:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:01.350 22:03:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.350 22:03:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.350 22:03:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:01.350 22:03:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:01.350 22:03:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.350 22:03:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.351 22:03:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:01.611 22:03:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:01.611 22:03:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:01.611 22:03:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:01.611 22:03:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.611 22:03:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.611 22:03:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:01.611 22:03:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:01.611 22:03:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.611 22:03:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.611 22:03:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.611 22:03:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.872 22:03:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:01.872 22:03:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:01.872 22:03:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.872 22:03:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:01.872 22:03:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:01.872 22:03:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.872 22:03:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:01.872 22:03:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:01.872 22:03:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:01.872 22:03:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:01.872 22:03:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:01.872 22:03:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:01.872 22:03:27 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:02.132 22:03:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:02.132 [2024-07-15 22:03:27.339072] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:02.132 [2024-07-15 22:03:27.402844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.132 [2024-07-15 22:03:27.402847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.132 [2024-07-15 22:03:27.435062] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:02.132 [2024-07-15 22:03:27.435097] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:05.428 22:03:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:05.428 22:03:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:05.428 spdk_app_start Round 2 00:06:05.428 22:03:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2562378 /var/tmp/spdk-nbd.sock 00:06:05.428 22:03:30 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2562378 ']' 00:06:05.428 22:03:30 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:05.428 22:03:30 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.428 22:03:30 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:05.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
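Between the writes and the teardown, each round also cross-checks the RPC view of the exported devices, as in the nbd_get_disks/jq exchange traced above. A sketch of that check (the expected count of 2 simply matches the two malloc bdevs in this test):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  nbd_disks_json=$($rpc -s $sock nbd_get_disks)
  # keep only the /dev/nbdX names from the JSON reply
  nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
  count=$(echo "$nbd_disks_name" | grep -c /dev/nbd)
  [ "$count" -eq 2 ] || exit 1                      # both devices must still be exported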
00:06:05.428 22:03:30 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.428 22:03:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:05.428 22:03:30 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.428 22:03:30 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:05.428 22:03:30 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.428 Malloc0 00:06:05.428 22:03:30 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.428 Malloc1 00:06:05.428 22:03:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.428 22:03:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.428 22:03:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.428 22:03:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:05.428 22:03:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.428 22:03:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:05.428 22:03:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.428 22:03:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.428 22:03:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.428 22:03:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:05.428 22:03:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.428 22:03:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:05.428 22:03:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:05.428 22:03:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:05.428 22:03:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.429 22:03:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:05.688 /dev/nbd0 00:06:05.688 22:03:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:05.688 22:03:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:05.688 22:03:30 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:05.688 22:03:30 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:05.688 22:03:30 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:05.688 22:03:30 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:05.689 22:03:30 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:05.689 22:03:30 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:05.689 22:03:30 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:05.689 22:03:30 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:05.689 22:03:30 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:05.689 1+0 records in 00:06:05.689 1+0 records out 00:06:05.689 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236239 s, 17.3 MB/s 00:06:05.689 22:03:30 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.689 22:03:30 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:05.689 22:03:30 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.689 22:03:30 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:05.689 22:03:30 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:05.689 22:03:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.689 22:03:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.689 22:03:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:05.948 /dev/nbd1 00:06:05.948 22:03:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:05.948 22:03:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:05.948 22:03:31 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:05.948 22:03:31 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:05.948 22:03:31 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:05.948 22:03:31 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:05.948 22:03:31 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:05.948 22:03:31 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:05.948 22:03:31 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:05.948 22:03:31 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:05.948 22:03:31 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.948 1+0 records in 00:06:05.948 1+0 records out 00:06:05.948 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277065 s, 14.8 MB/s 00:06:05.948 22:03:31 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.948 22:03:31 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:05.948 22:03:31 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.948 22:03:31 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:05.948 22:03:31 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:05.948 22:03:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.948 22:03:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.948 22:03:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.948 22:03:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.948 22:03:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.948 22:03:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:05.948 { 00:06:05.948 "nbd_device": "/dev/nbd0", 00:06:05.948 "bdev_name": "Malloc0" 00:06:05.948 }, 00:06:05.948 { 00:06:05.948 "nbd_device": "/dev/nbd1", 00:06:05.948 "bdev_name": "Malloc1" 00:06:05.948 } 00:06:05.948 ]' 00:06:05.948 22:03:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:05.948 { 00:06:05.948 "nbd_device": "/dev/nbd0", 00:06:05.948 "bdev_name": "Malloc0" 00:06:05.948 }, 00:06:05.948 { 00:06:05.948 "nbd_device": "/dev/nbd1", 00:06:05.948 "bdev_name": "Malloc1" 00:06:05.948 } 00:06:05.948 ]' 00:06:05.948 22:03:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.208 22:03:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:06.208 /dev/nbd1' 00:06:06.208 22:03:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:06.208 /dev/nbd1' 00:06:06.208 22:03:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.208 22:03:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:06.208 22:03:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:06.208 22:03:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:06.208 22:03:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:06.208 22:03:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:06.208 22:03:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.208 22:03:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.208 22:03:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:06.208 22:03:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.208 22:03:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:06.208 22:03:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:06.208 256+0 records in 00:06:06.208 256+0 records out 00:06:06.208 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124551 s, 84.2 MB/s 00:06:06.208 22:03:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.208 22:03:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:06.208 256+0 records in 00:06:06.208 256+0 records out 00:06:06.208 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0157979 s, 66.4 MB/s 00:06:06.208 22:03:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.208 22:03:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:06.208 256+0 records in 00:06:06.208 256+0 records out 00:06:06.208 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0173588 s, 60.4 MB/s 00:06:06.208 22:03:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:06.208 22:03:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.208 22:03:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.208 22:03:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:06.208 22:03:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.209 22:03:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:06.209 22:03:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:06.209 22:03:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.209 22:03:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:06.209 22:03:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.209 22:03:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:06.209 22:03:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.209 22:03:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:06.209 22:03:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.209 22:03:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.209 22:03:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:06.209 22:03:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:06.209 22:03:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.209 22:03:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:06.209 22:03:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:06.469 22:03:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:06.469 22:03:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:06.469 22:03:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.469 22:03:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.469 22:03:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:06.469 22:03:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:06.469 22:03:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.469 22:03:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.469 22:03:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:06.469 22:03:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:06.469 22:03:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:06.469 22:03:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:06.469 22:03:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.469 22:03:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.469 22:03:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:06.469 22:03:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:06.469 22:03:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.469 22:03:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.469 22:03:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.469 22:03:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.729 22:03:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:06.729 22:03:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:06.729 22:03:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.729 22:03:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:06.729 22:03:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:06.729 22:03:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.729 22:03:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:06.729 22:03:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:06.729 22:03:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:06.729 22:03:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:06.729 22:03:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:06.729 22:03:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:06.729 22:03:31 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:06.988 22:03:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:06.988 [2024-07-15 22:03:32.222043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:06.988 [2024-07-15 22:03:32.285842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.988 [2024-07-15 22:03:32.285846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.247 [2024-07-15 22:03:32.317250] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:07.247 [2024-07-15 22:03:32.317280] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:09.784 22:03:35 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2562378 /var/tmp/spdk-nbd.sock 00:06:09.784 22:03:35 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2562378 ']' 00:06:09.784 22:03:35 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:09.784 22:03:35 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.785 22:03:35 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:09.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
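Each round ends the same way as the one just traced: detach both NBD devices, wait for them to drop out of /proc/partitions, confirm nbd_get_disks is empty, then signal the app so it can move on to the next round. A simplified sketch; the real waitfornbd_exit helper bounds its polling at 20 attempts rather than looping forever:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  for name in nbd0 nbd1; do
      $rpc -s $sock nbd_stop_disk /dev/$name
      # waitfornbd_exit: poll until the name disappears from /proc/partitions
      while grep -q -w $name /proc/partitions; do sleep 0.1; done
  done
  # the device list should now be empty (same jq / grep -c count check as above, expecting 0)
  $rpc -s $sock nbd_get_disks
  # SIGTERM ends this spdk_app_start iteration; app_repeat then starts the next round
  $rpc -s $sock spdk_kill_instance SIGTERM
  sleep 3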
00:06:09.785 22:03:35 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.785 22:03:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:10.045 22:03:35 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.045 22:03:35 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:10.045 22:03:35 event.app_repeat -- event/event.sh@39 -- # killprocess 2562378 00:06:10.045 22:03:35 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 2562378 ']' 00:06:10.045 22:03:35 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 2562378 00:06:10.045 22:03:35 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:10.045 22:03:35 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:10.045 22:03:35 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2562378 00:06:10.045 22:03:35 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:10.045 22:03:35 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:10.045 22:03:35 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2562378' 00:06:10.045 killing process with pid 2562378 00:06:10.045 22:03:35 event.app_repeat -- common/autotest_common.sh@967 -- # kill 2562378 00:06:10.045 22:03:35 event.app_repeat -- common/autotest_common.sh@972 -- # wait 2562378 00:06:10.307 spdk_app_start is called in Round 0. 00:06:10.307 Shutdown signal received, stop current app iteration 00:06:10.307 Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 reinitialization... 00:06:10.307 spdk_app_start is called in Round 1. 00:06:10.307 Shutdown signal received, stop current app iteration 00:06:10.307 Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 reinitialization... 00:06:10.307 spdk_app_start is called in Round 2. 00:06:10.307 Shutdown signal received, stop current app iteration 00:06:10.307 Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 reinitialization... 00:06:10.307 spdk_app_start is called in Round 3. 
00:06:10.307 Shutdown signal received, stop current app iteration 00:06:10.307 22:03:35 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:10.307 22:03:35 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:10.307 00:06:10.307 real 0m15.619s 00:06:10.307 user 0m33.659s 00:06:10.307 sys 0m2.113s 00:06:10.307 22:03:35 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.307 22:03:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:10.307 ************************************ 00:06:10.307 END TEST app_repeat 00:06:10.307 ************************************ 00:06:10.307 22:03:35 event -- common/autotest_common.sh@1142 -- # return 0 00:06:10.307 22:03:35 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:10.307 22:03:35 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:10.307 22:03:35 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:10.307 22:03:35 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.307 22:03:35 event -- common/autotest_common.sh@10 -- # set +x 00:06:10.307 ************************************ 00:06:10.307 START TEST cpu_locks 00:06:10.307 ************************************ 00:06:10.307 22:03:35 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:10.307 * Looking for test storage... 00:06:10.307 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:10.307 22:03:35 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:10.307 22:03:35 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:10.307 22:03:35 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:10.307 22:03:35 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:10.307 22:03:35 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:10.307 22:03:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.307 22:03:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.570 ************************************ 00:06:10.570 START TEST default_locks 00:06:10.570 ************************************ 00:06:10.570 22:03:35 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:10.570 22:03:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2565916 00:06:10.570 22:03:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2565916 00:06:10.570 22:03:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:10.570 22:03:35 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2565916 ']' 00:06:10.570 22:03:35 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.571 22:03:35 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.571 22:03:35 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
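The default_locks run that starts here checks SPDK's per-core lock files directly: a target started with -m 0x1 must hold a file lock matching spdk_cpu_lock. A minimal sketch of that check, with the suite's waitforlisten helper replaced by a comment and the paths taken from the trace:

  spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  $spdk_tgt -m 0x1 &            # core mask 0x1: claim core 0 only
  pid=$!
  # ... wait until /var/tmp/spdk.sock is listening (waitforlisten in the suite) ...
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held"
  # the stray "lslocks: write error" lines in the trace are most likely just lslocks
  # hitting a broken pipe once grep -q has matched and exited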
00:06:10.571 22:03:35 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.571 22:03:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.571 [2024-07-15 22:03:35.698305] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:06:10.571 [2024-07-15 22:03:35.698372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2565916 ] 00:06:10.571 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.571 [2024-07-15 22:03:35.760474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.571 [2024-07-15 22:03:35.825301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.511 22:03:36 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.511 22:03:36 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:11.511 22:03:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2565916 00:06:11.511 22:03:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2565916 00:06:11.511 22:03:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:11.511 lslocks: write error 00:06:11.511 22:03:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2565916 00:06:11.511 22:03:36 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 2565916 ']' 00:06:11.511 22:03:36 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 2565916 00:06:11.511 22:03:36 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:11.511 22:03:36 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:11.511 22:03:36 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2565916 00:06:11.772 22:03:36 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:11.772 22:03:36 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:11.772 22:03:36 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2565916' 00:06:11.772 killing process with pid 2565916 00:06:11.772 22:03:36 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 2565916 00:06:11.772 22:03:36 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 2565916 00:06:11.772 22:03:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2565916 00:06:11.772 22:03:37 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:11.772 22:03:37 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2565916 00:06:11.772 22:03:37 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:11.772 22:03:37 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:11.772 22:03:37 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:11.772 22:03:37 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:11.772 22:03:37 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 2565916 00:06:11.772 22:03:37 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2565916 ']' 00:06:11.772 22:03:37 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.772 22:03:37 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.772 22:03:37 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.772 22:03:37 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.772 22:03:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.772 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2565916) - No such process 00:06:11.772 ERROR: process (pid: 2565916) is no longer running 00:06:11.772 22:03:37 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.772 22:03:37 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:11.772 22:03:37 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:11.772 22:03:37 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:11.772 22:03:37 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:11.772 22:03:37 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:11.772 22:03:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:11.772 22:03:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:11.772 22:03:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:11.772 22:03:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:11.772 00:06:11.772 real 0m1.459s 00:06:11.772 user 0m1.566s 00:06:11.772 sys 0m0.480s 00:06:11.772 22:03:37 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.033 22:03:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.033 ************************************ 00:06:12.033 END TEST default_locks 00:06:12.033 ************************************ 00:06:12.033 22:03:37 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:12.033 22:03:37 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:12.033 22:03:37 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:12.033 22:03:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.033 22:03:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.033 ************************************ 00:06:12.033 START TEST default_locks_via_rpc 00:06:12.033 ************************************ 00:06:12.033 22:03:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:12.033 22:03:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2566206 00:06:12.033 22:03:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2566206 00:06:12.033 22:03:37 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:12.033 22:03:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2566206 ']' 00:06:12.033 22:03:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.033 22:03:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.033 22:03:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.033 22:03:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.033 22:03:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.033 [2024-07-15 22:03:37.225732] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:06:12.033 [2024-07-15 22:03:37.225786] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2566206 ] 00:06:12.033 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.033 [2024-07-15 22:03:37.288748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.292 [2024-07-15 22:03:37.358016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.861 22:03:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.861 22:03:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:12.861 22:03:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:12.861 22:03:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.861 22:03:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.861 22:03:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.861 22:03:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:12.861 22:03:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:12.861 22:03:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:12.861 22:03:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:12.861 22:03:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:12.861 22:03:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.861 22:03:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.861 22:03:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.861 22:03:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2566206 00:06:12.861 22:03:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2566206 00:06:12.861 22:03:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 
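default_locks_via_rpc, traced above, drives the same state over RPC instead of command-line flags: it disables the core locks, checks that no /var/tmp/spdk_cpu_lock_* files remain, re-enables them, and re-runs the lslocks check. Roughly (rpc shorthand as before, target already running with -m 0x1):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc framework_disable_cpumask_locks        # drop the core locks at runtime
  # the suite verifies no /var/tmp/spdk_cpu_lock_* files are left at this point
  $rpc framework_enable_cpumask_locks         # take the locks again
  lslocks -p "$pid" | grep -q spdk_cpu_lock   # expected to succeed afterwards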
00:06:13.120 22:03:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2566206 00:06:13.121 22:03:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 2566206 ']' 00:06:13.121 22:03:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 2566206 00:06:13.121 22:03:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:13.121 22:03:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:13.121 22:03:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2566206 00:06:13.380 22:03:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:13.380 22:03:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:13.380 22:03:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2566206' 00:06:13.380 killing process with pid 2566206 00:06:13.380 22:03:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 2566206 00:06:13.380 22:03:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 2566206 00:06:13.380 00:06:13.380 real 0m1.498s 00:06:13.380 user 0m1.610s 00:06:13.380 sys 0m0.497s 00:06:13.380 22:03:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.380 22:03:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.380 ************************************ 00:06:13.380 END TEST default_locks_via_rpc 00:06:13.380 ************************************ 00:06:13.380 22:03:38 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:13.640 22:03:38 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:13.640 22:03:38 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:13.640 22:03:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.640 22:03:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.640 ************************************ 00:06:13.640 START TEST non_locking_app_on_locked_coremask 00:06:13.640 ************************************ 00:06:13.640 22:03:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:13.640 22:03:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2566516 00:06:13.640 22:03:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2566516 /var/tmp/spdk.sock 00:06:13.640 22:03:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:13.640 22:03:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2566516 ']' 00:06:13.640 22:03:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.640 22:03:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.640 22:03:38 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.640 22:03:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.640 22:03:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.640 [2024-07-15 22:03:38.798507] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:06:13.640 [2024-07-15 22:03:38.798559] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2566516 ] 00:06:13.640 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.640 [2024-07-15 22:03:38.859196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.640 [2024-07-15 22:03:38.929662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.581 22:03:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.581 22:03:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:14.581 22:03:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2566657 00:06:14.581 22:03:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2566657 /var/tmp/spdk2.sock 00:06:14.581 22:03:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:14.581 22:03:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2566657 ']' 00:06:14.581 22:03:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:14.581 22:03:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.581 22:03:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:14.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:14.581 22:03:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.581 22:03:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.581 [2024-07-15 22:03:39.622464] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:06:14.581 [2024-07-15 22:03:39.622517] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2566657 ] 00:06:14.581 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.581 [2024-07-15 22:03:39.710289] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
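non_locking_app_on_locked_coremask, in progress here, shows the opt-out path: the first target holds the core-0 lock, and a second target on the same mask still comes up because it passes --disable-cpumask-locks, hence the "CPU core locks deactivated." notice just above. Condensed:

  $spdk_tgt -m 0x1 &                                                  # holds /var/tmp/spdk_cpu_lock_000
  $spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # starts anyway, takes no lock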
00:06:14.581 [2024-07-15 22:03:39.710314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.581 [2024-07-15 22:03:39.839520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.183 22:03:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.183 22:03:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:15.183 22:03:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2566516 00:06:15.183 22:03:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2566516 00:06:15.183 22:03:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:15.756 lslocks: write error 00:06:15.756 22:03:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2566516 00:06:15.756 22:03:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2566516 ']' 00:06:15.756 22:03:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2566516 00:06:15.756 22:03:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:15.756 22:03:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:15.756 22:03:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2566516 00:06:15.756 22:03:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:15.756 22:03:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:15.756 22:03:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2566516' 00:06:15.756 killing process with pid 2566516 00:06:15.756 22:03:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2566516 00:06:15.756 22:03:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2566516 00:06:16.018 22:03:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2566657 00:06:16.018 22:03:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2566657 ']' 00:06:16.018 22:03:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2566657 00:06:16.278 22:03:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:16.278 22:03:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:16.278 22:03:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2566657 00:06:16.278 22:03:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:16.278 22:03:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:16.278 22:03:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2566657' 00:06:16.278 
killing process with pid 2566657 00:06:16.278 22:03:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2566657 00:06:16.278 22:03:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2566657 00:06:16.538 00:06:16.538 real 0m2.862s 00:06:16.538 user 0m3.100s 00:06:16.538 sys 0m0.882s 00:06:16.538 22:03:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.538 22:03:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.538 ************************************ 00:06:16.539 END TEST non_locking_app_on_locked_coremask 00:06:16.539 ************************************ 00:06:16.539 22:03:41 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:16.539 22:03:41 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:16.539 22:03:41 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.539 22:03:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.539 22:03:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.539 ************************************ 00:06:16.539 START TEST locking_app_on_unlocked_coremask 00:06:16.539 ************************************ 00:06:16.539 22:03:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:16.539 22:03:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2567036 00:06:16.539 22:03:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2567036 /var/tmp/spdk.sock 00:06:16.539 22:03:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:16.539 22:03:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2567036 ']' 00:06:16.539 22:03:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.539 22:03:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.539 22:03:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.539 22:03:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.539 22:03:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.539 [2024-07-15 22:03:41.739722] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
00:06:16.539 [2024-07-15 22:03:41.739808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2567036 ] 00:06:16.539 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.539 [2024-07-15 22:03:41.803932] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:16.539 [2024-07-15 22:03:41.803964] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.799 [2024-07-15 22:03:41.877712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.368 22:03:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.368 22:03:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:17.368 22:03:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2567365 00:06:17.368 22:03:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2567365 /var/tmp/spdk2.sock 00:06:17.368 22:03:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2567365 ']' 00:06:17.368 22:03:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:17.368 22:03:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:17.368 22:03:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:17.368 22:03:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:17.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:17.368 22:03:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:17.368 22:03:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.368 [2024-07-15 22:03:42.547634] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
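locking_app_on_unlocked_coremask is the mirror image of the previous case: the first target opts out of the locks, so the second, plain target is the one that ends up owning them, which the lslocks check later in this trace asserts against the second pid. In short:

  $spdk_tgt -m 0x1 --disable-cpumask-locks &        # leaves core 0 unlocked
  $spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &         # claims the core-0 lock itself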
00:06:17.368 [2024-07-15 22:03:42.547684] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2567365 ] 00:06:17.368 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.368 [2024-07-15 22:03:42.636096] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.628 [2024-07-15 22:03:42.765212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.198 22:03:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:18.198 22:03:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:18.198 22:03:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2567365 00:06:18.198 22:03:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2567365 00:06:18.198 22:03:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.458 lslocks: write error 00:06:18.458 22:03:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2567036 00:06:18.458 22:03:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2567036 ']' 00:06:18.458 22:03:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2567036 00:06:18.458 22:03:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:18.458 22:03:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:18.458 22:03:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2567036 00:06:18.458 22:03:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:18.458 22:03:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:18.458 22:03:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2567036' 00:06:18.458 killing process with pid 2567036 00:06:18.458 22:03:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2567036 00:06:18.458 22:03:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2567036 00:06:18.718 22:03:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2567365 00:06:18.718 22:03:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2567365 ']' 00:06:18.718 22:03:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2567365 00:06:18.718 22:03:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:18.718 22:03:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:18.718 22:03:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2567365 00:06:18.978 22:03:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:06:18.978 22:03:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:18.978 22:03:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2567365' 00:06:18.978 killing process with pid 2567365 00:06:18.978 22:03:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2567365 00:06:18.978 22:03:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2567365 00:06:18.978 00:06:18.978 real 0m2.605s 00:06:18.978 user 0m2.840s 00:06:18.978 sys 0m0.759s 00:06:18.978 22:03:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.978 22:03:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.978 ************************************ 00:06:18.978 END TEST locking_app_on_unlocked_coremask 00:06:18.978 ************************************ 00:06:19.238 22:03:44 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:19.238 22:03:44 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:19.238 22:03:44 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:19.238 22:03:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.238 22:03:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.238 ************************************ 00:06:19.238 START TEST locking_app_on_locked_coremask 00:06:19.238 ************************************ 00:06:19.238 22:03:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:19.238 22:03:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2567742 00:06:19.238 22:03:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2567742 /var/tmp/spdk.sock 00:06:19.238 22:03:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:19.238 22:03:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2567742 ']' 00:06:19.238 22:03:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.238 22:03:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.238 22:03:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.238 22:03:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.238 22:03:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.238 [2024-07-15 22:03:44.415470] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
00:06:19.238 [2024-07-15 22:03:44.415520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2567742 ] 00:06:19.238 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.238 [2024-07-15 22:03:44.474426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.238 [2024-07-15 22:03:44.540500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.178 22:03:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.178 22:03:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:20.178 22:03:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2567772 00:06:20.178 22:03:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2567772 /var/tmp/spdk2.sock 00:06:20.178 22:03:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:20.178 22:03:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:20.178 22:03:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2567772 /var/tmp/spdk2.sock 00:06:20.178 22:03:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:20.178 22:03:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:20.178 22:03:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:20.178 22:03:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:20.178 22:03:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2567772 /var/tmp/spdk2.sock 00:06:20.178 22:03:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2567772 ']' 00:06:20.178 22:03:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.178 22:03:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.178 22:03:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:20.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:20.178 22:03:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.178 22:03:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.178 [2024-07-15 22:03:45.222249] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
00:06:20.178 [2024-07-15 22:03:45.222300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2567772 ] 00:06:20.178 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.178 [2024-07-15 22:03:45.308954] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2567742 has claimed it. 00:06:20.178 [2024-07-15 22:03:45.308992] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:20.747 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2567772) - No such process 00:06:20.747 ERROR: process (pid: 2567772) is no longer running 00:06:20.747 22:03:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.747 22:03:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:20.747 22:03:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:20.747 22:03:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:20.747 22:03:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:20.747 22:03:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:20.747 22:03:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2567742 00:06:20.747 22:03:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2567742 00:06:20.747 22:03:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:21.007 lslocks: write error 00:06:21.007 22:03:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2567742 00:06:21.007 22:03:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2567742 ']' 00:06:21.007 22:03:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2567742 00:06:21.007 22:03:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:21.007 22:03:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:21.007 22:03:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2567742 00:06:21.267 22:03:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:21.267 22:03:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:21.267 22:03:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2567742' 00:06:21.267 killing process with pid 2567742 00:06:21.267 22:03:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2567742 00:06:21.267 22:03:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2567742 00:06:21.267 00:06:21.267 real 0m2.203s 00:06:21.267 user 0m2.445s 00:06:21.267 sys 0m0.606s 00:06:21.267 22:03:46 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.267 22:03:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.267 ************************************ 00:06:21.267 END TEST locking_app_on_locked_coremask 00:06:21.267 ************************************ 00:06:21.527 22:03:46 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:21.527 22:03:46 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:21.527 22:03:46 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:21.527 22:03:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.527 22:03:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.527 ************************************ 00:06:21.527 START TEST locking_overlapped_coremask 00:06:21.527 ************************************ 00:06:21.527 22:03:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:21.527 22:03:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2568116 00:06:21.527 22:03:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2568116 /var/tmp/spdk.sock 00:06:21.527 22:03:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:21.527 22:03:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2568116 ']' 00:06:21.527 22:03:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.527 22:03:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:21.527 22:03:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.527 22:03:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:21.527 22:03:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.527 [2024-07-15 22:03:46.688471] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
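locking_app_on_locked_coremask, which finished just above, is the negative case the two previous tests build toward: without --disable-cpumask-locks, a second target on an already-claimed core must refuse to start, which is exactly the "Cannot create lock on core 0, probably process 2567742 has claimed it" / "Unable to acquire lock on assigned core mask - exiting" pair in the trace. Reduced to its essence:

  $spdk_tgt -m 0x1 &                           # first target claims and locks core 0
  $spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock      # second target exits at startup with the claim error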
00:06:21.527 [2024-07-15 22:03:46.688520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2568116 ] 00:06:21.527 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.527 [2024-07-15 22:03:46.747590] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:21.527 [2024-07-15 22:03:46.812292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.527 [2024-07-15 22:03:46.812406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.527 [2024-07-15 22:03:46.812408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.466 22:03:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.466 22:03:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:22.466 22:03:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2568437 00:06:22.466 22:03:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2568437 /var/tmp/spdk2.sock 00:06:22.466 22:03:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:22.466 22:03:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:22.466 22:03:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2568437 /var/tmp/spdk2.sock 00:06:22.466 22:03:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:22.466 22:03:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.466 22:03:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:22.466 22:03:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.466 22:03:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2568437 /var/tmp/spdk2.sock 00:06:22.466 22:03:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2568437 ']' 00:06:22.466 22:03:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:22.466 22:03:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.466 22:03:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:22.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:22.466 22:03:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.466 22:03:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.466 [2024-07-15 22:03:47.515726] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
00:06:22.466 [2024-07-15 22:03:47.515779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2568437 ] 00:06:22.466 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.466 [2024-07-15 22:03:47.585788] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2568116 has claimed it. 00:06:22.466 [2024-07-15 22:03:47.585822] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:23.057 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2568437) - No such process 00:06:23.057 ERROR: process (pid: 2568437) is no longer running 00:06:23.057 22:03:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.057 22:03:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:23.057 22:03:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:23.057 22:03:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:23.057 22:03:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:23.057 22:03:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:23.057 22:03:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:23.057 22:03:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:23.057 22:03:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:23.057 22:03:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:23.057 22:03:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2568116 00:06:23.057 22:03:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 2568116 ']' 00:06:23.057 22:03:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 2568116 00:06:23.057 22:03:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:23.057 22:03:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:23.057 22:03:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2568116 00:06:23.057 22:03:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:23.057 22:03:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:23.057 22:03:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2568116' 00:06:23.057 killing process with pid 2568116 00:06:23.057 22:03:48 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 2568116 00:06:23.057 22:03:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 2568116 00:06:23.318 00:06:23.318 real 0m1.750s 00:06:23.318 user 0m4.984s 00:06:23.318 sys 0m0.359s 00:06:23.318 22:03:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.318 22:03:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.318 ************************************ 00:06:23.318 END TEST locking_overlapped_coremask 00:06:23.318 ************************************ 00:06:23.318 22:03:48 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:23.318 22:03:48 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:23.318 22:03:48 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:23.318 22:03:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.318 22:03:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.318 ************************************ 00:06:23.318 START TEST locking_overlapped_coremask_via_rpc 00:06:23.318 ************************************ 00:06:23.318 22:03:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:23.318 22:03:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2568494 00:06:23.318 22:03:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2568494 /var/tmp/spdk.sock 00:06:23.318 22:03:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:23.318 22:03:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2568494 ']' 00:06:23.318 22:03:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.318 22:03:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:23.318 22:03:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.318 22:03:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:23.318 22:03:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.318 [2024-07-15 22:03:48.517921] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:06:23.318 [2024-07-15 22:03:48.517974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2568494 ] 00:06:23.318 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.318 [2024-07-15 22:03:48.579407] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
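locking_overlapped_coremask, just completed, generalises that to partial overlap: the first target locks cores 0-2 (-m 0x7), and a second target asking for cores 2-4 (-m 0x1c) fails on the shared core, as the "Cannot create lock on core 2, probably process 2568116 has claimed it" error shows; the suite then confirms that exactly /var/tmp/spdk_cpu_lock_000 through _002 remain. Sketch (the ls line stands in for the suite's glob check):

  $spdk_tgt -m 0x7 &                           # locks cores 0, 1, 2
  $spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock     # wants cores 2, 3, 4; core 2 collides, so it exits
  ls /var/tmp/spdk_cpu_lock_*                  # only _000 _001 _002 should be present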
00:06:23.318 [2024-07-15 22:03:48.579439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:23.578 [2024-07-15 22:03:48.654245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.578 [2024-07-15 22:03:48.654521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.578 [2024-07-15 22:03:48.654524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.149 22:03:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.149 22:03:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:24.149 22:03:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2568823 00:06:24.149 22:03:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2568823 /var/tmp/spdk2.sock 00:06:24.149 22:03:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2568823 ']' 00:06:24.149 22:03:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:24.149 22:03:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.149 22:03:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.150 22:03:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:24.150 22:03:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.150 22:03:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.150 [2024-07-15 22:03:49.330304] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:06:24.150 [2024-07-15 22:03:49.330359] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2568823 ] 00:06:24.150 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.150 [2024-07-15 22:03:49.402469] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:24.150 [2024-07-15 22:03:49.402491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:24.410 [2024-07-15 22:03:49.512353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:24.410 [2024-07-15 22:03:49.512508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.410 [2024-07-15 22:03:49.512511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:24.980 22:03:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.980 22:03:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:24.980 22:03:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:24.980 22:03:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.980 22:03:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.980 22:03:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.980 22:03:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:24.980 22:03:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:24.980 22:03:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:24.980 22:03:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:24.980 22:03:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:24.980 22:03:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:24.980 22:03:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:24.980 22:03:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:24.980 22:03:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.980 22:03:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.980 [2024-07-15 22:03:50.108192] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2568494 has claimed it. 
00:06:24.980 request: 00:06:24.980 { 00:06:24.980 "method": "framework_enable_cpumask_locks", 00:06:24.980 "req_id": 1 00:06:24.980 } 00:06:24.980 Got JSON-RPC error response 00:06:24.980 response: 00:06:24.980 { 00:06:24.980 "code": -32603, 00:06:24.980 "message": "Failed to claim CPU core: 2" 00:06:24.980 } 00:06:24.980 22:03:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:24.980 22:03:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:24.980 22:03:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:24.980 22:03:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:24.980 22:03:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:24.980 22:03:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2568494 /var/tmp/spdk.sock 00:06:24.980 22:03:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2568494 ']' 00:06:24.980 22:03:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.980 22:03:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.980 22:03:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.980 22:03:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.980 22:03:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.980 22:03:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.980 22:03:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:24.980 22:03:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2568823 /var/tmp/spdk2.sock 00:06:24.980 22:03:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2568823 ']' 00:06:24.980 22:03:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.980 22:03:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.980 22:03:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
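The exchange above is the heart of the locking_overlapped_coremask_via_rpc case: both targets come up with --disable-cpumask-locks, the first target (mask 0x7) then claims cores 0-2 through the framework_enable_cpumask_locks RPC, and the same RPC against the second target (mask 0x1c) fails with -32603 because core 2 is already locked. A minimal manual reproduction, assuming the standard scripts/rpc.py client exposes the method under the name used by the test, and using the lock-file location shown elsewhere in this log:

```bash
# Sketch only: paths, masks and socket names mirror the ones used in this run.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Two targets with overlapping core masks; locks disabled so both can start.
$SPDK/build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &
$SPDK/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
sleep 2   # crude stand-in for the harness's waitforlisten helper

# First target claims cores 0-2; lock files appear under /var/tmp.
$SPDK/scripts/rpc.py framework_enable_cpumask_locks
ls /var/tmp/spdk_cpu_lock_*          # expect ..._000 ..._001 ..._002

# Second target overlaps on core 2, so this call should fail with -32603.
$SPDK/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
```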
00:06:24.980 22:03:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.980 22:03:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.241 22:03:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:25.241 22:03:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:25.241 22:03:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:25.241 22:03:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:25.241 22:03:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:25.241 22:03:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:25.241 00:06:25.241 real 0m1.996s 00:06:25.241 user 0m0.751s 00:06:25.241 sys 0m0.170s 00:06:25.241 22:03:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.241 22:03:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.241 ************************************ 00:06:25.241 END TEST locking_overlapped_coremask_via_rpc 00:06:25.241 ************************************ 00:06:25.241 22:03:50 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:25.241 22:03:50 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:25.241 22:03:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2568494 ]] 00:06:25.241 22:03:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2568494 00:06:25.241 22:03:50 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2568494 ']' 00:06:25.241 22:03:50 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2568494 00:06:25.241 22:03:50 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:25.241 22:03:50 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:25.241 22:03:50 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2568494 00:06:25.241 22:03:50 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:25.241 22:03:50 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:25.241 22:03:50 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2568494' 00:06:25.241 killing process with pid 2568494 00:06:25.241 22:03:50 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2568494 00:06:25.241 22:03:50 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2568494 00:06:25.501 22:03:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2568823 ]] 00:06:25.501 22:03:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2568823 00:06:25.501 22:03:50 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2568823 ']' 00:06:25.501 22:03:50 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2568823 00:06:25.501 22:03:50 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:06:25.501 22:03:50 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:25.501 22:03:50 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2568823 00:06:25.501 22:03:50 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:25.501 22:03:50 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:25.501 22:03:50 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2568823' 00:06:25.501 killing process with pid 2568823 00:06:25.501 22:03:50 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2568823 00:06:25.501 22:03:50 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2568823 00:06:25.761 22:03:51 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:25.761 22:03:51 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:25.761 22:03:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2568494 ]] 00:06:25.761 22:03:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2568494 00:06:25.761 22:03:51 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2568494 ']' 00:06:25.761 22:03:51 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2568494 00:06:25.761 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2568494) - No such process 00:06:25.761 22:03:51 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2568494 is not found' 00:06:25.761 Process with pid 2568494 is not found 00:06:25.761 22:03:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2568823 ]] 00:06:25.761 22:03:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2568823 00:06:25.761 22:03:51 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2568823 ']' 00:06:25.761 22:03:51 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2568823 00:06:25.761 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2568823) - No such process 00:06:25.761 22:03:51 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2568823 is not found' 00:06:25.761 Process with pid 2568823 is not found 00:06:25.761 22:03:51 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:25.761 00:06:25.761 real 0m15.524s 00:06:25.761 user 0m26.823s 00:06:25.761 sys 0m4.631s 00:06:25.761 22:03:51 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.761 22:03:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.761 ************************************ 00:06:25.761 END TEST cpu_locks 00:06:25.761 ************************************ 00:06:25.761 22:03:51 event -- common/autotest_common.sh@1142 -- # return 0 00:06:25.761 00:06:25.761 real 0m41.013s 00:06:25.761 user 1m19.813s 00:06:25.761 sys 0m7.673s 00:06:25.761 22:03:51 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.761 22:03:51 event -- common/autotest_common.sh@10 -- # set +x 00:06:25.761 ************************************ 00:06:25.761 END TEST event 00:06:25.761 ************************************ 00:06:26.021 22:03:51 -- common/autotest_common.sh@1142 -- # return 0 00:06:26.021 22:03:51 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:26.021 22:03:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:26.021 22:03:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.021 
22:03:51 -- common/autotest_common.sh@10 -- # set +x 00:06:26.021 ************************************ 00:06:26.021 START TEST thread 00:06:26.021 ************************************ 00:06:26.021 22:03:51 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:26.021 * Looking for test storage... 00:06:26.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:26.021 22:03:51 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:26.021 22:03:51 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:26.021 22:03:51 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.021 22:03:51 thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.021 ************************************ 00:06:26.021 START TEST thread_poller_perf 00:06:26.021 ************************************ 00:06:26.021 22:03:51 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:26.021 [2024-07-15 22:03:51.285721] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:06:26.021 [2024-07-15 22:03:51.285835] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2569265 ] 00:06:26.021 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.281 [2024-07-15 22:03:51.353897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.281 [2024-07-15 22:03:51.427234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.281 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:27.219 ====================================== 00:06:27.219 busy:2406567104 (cyc) 00:06:27.219 total_run_count: 287000 00:06:27.219 tsc_hz: 2400000000 (cyc) 00:06:27.219 ====================================== 00:06:27.219 poller_cost: 8385 (cyc), 3493 (nsec) 00:06:27.219 00:06:27.219 real 0m1.225s 00:06:27.219 user 0m1.137s 00:06:27.219 sys 0m0.084s 00:06:27.219 22:03:52 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.219 22:03:52 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:27.219 ************************************ 00:06:27.219 END TEST thread_poller_perf 00:06:27.219 ************************************ 00:06:27.219 22:03:52 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:27.219 22:03:52 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:27.219 22:03:52 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:27.219 22:03:52 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.219 22:03:52 thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.479 ************************************ 00:06:27.479 START TEST thread_poller_perf 00:06:27.479 ************************************ 00:06:27.479 22:03:52 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:27.479 [2024-07-15 22:03:52.585513] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:06:27.479 [2024-07-15 22:03:52.585607] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2569614 ] 00:06:27.479 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.479 [2024-07-15 22:03:52.648705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.479 [2024-07-15 22:03:52.713025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.479 Running 1000 pollers for 1 seconds with 0 microseconds period. 
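For the run above, poller_cost looks like busy cycles divided by total_run_count, converted to nanoseconds with the reported TSC rate: 2406567104 / 287000 ≈ 8385 cycles, and 8385 cycles at 2.4 GHz ≈ 3493 ns, matching the printed figures. A quick re-derivation (the same arithmetic applies to the 0-microsecond-period run whose results follow):

```bash
# Re-derive poller_cost from the numbers printed by poller_perf above.
busy=2406567104; runs=287000; tsc_hz=2400000000
echo "cycles per poll: $((busy / runs))"                        # 8385
echo "ns per poll:     $((busy / runs * 1000000000 / tsc_hz))"  # 3493
```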
00:06:28.860 ====================================== 00:06:28.860 busy:2402307046 (cyc) 00:06:28.860 total_run_count: 3811000 00:06:28.860 tsc_hz: 2400000000 (cyc) 00:06:28.860 ====================================== 00:06:28.860 poller_cost: 630 (cyc), 262 (nsec) 00:06:28.860 00:06:28.860 real 0m1.205s 00:06:28.860 user 0m1.125s 00:06:28.860 sys 0m0.076s 00:06:28.860 22:03:53 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.860 22:03:53 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:28.860 ************************************ 00:06:28.860 END TEST thread_poller_perf 00:06:28.860 ************************************ 00:06:28.860 22:03:53 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:28.860 22:03:53 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:28.860 00:06:28.860 real 0m2.673s 00:06:28.860 user 0m2.347s 00:06:28.860 sys 0m0.334s 00:06:28.860 22:03:53 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.861 22:03:53 thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.861 ************************************ 00:06:28.861 END TEST thread 00:06:28.861 ************************************ 00:06:28.861 22:03:53 -- common/autotest_common.sh@1142 -- # return 0 00:06:28.861 22:03:53 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:28.861 22:03:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:28.861 22:03:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.861 22:03:53 -- common/autotest_common.sh@10 -- # set +x 00:06:28.861 ************************************ 00:06:28.861 START TEST accel 00:06:28.861 ************************************ 00:06:28.861 22:03:53 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:28.861 * Looking for test storage... 00:06:28.861 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:28.861 22:03:53 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:28.861 22:03:53 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:28.861 22:03:53 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:28.861 22:03:53 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2569896 00:06:28.861 22:03:53 accel -- accel/accel.sh@63 -- # waitforlisten 2569896 00:06:28.861 22:03:53 accel -- common/autotest_common.sh@829 -- # '[' -z 2569896 ']' 00:06:28.861 22:03:53 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.861 22:03:53 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:28.861 22:03:53 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
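The accel suite that starts here first dumps the opcode-to-module table; in the output below every opcode resolves to the software module, since no hardware accel module is configured for this run. A rough way to inspect the same table by hand, assuming the project's scripts/rpc.py exposes the RPC under the name the test uses (accel_get_opc_assignments), with the jq filter taken from the log itself:

```bash
# Hypothetical manual check of the accel opcode/module assignments.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/scripts/rpc.py accel_get_opc_assignments \
  | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
# Expected here: every line ends in "=software".
```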
00:06:28.861 22:03:53 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:28.861 22:03:53 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:28.861 22:03:53 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:28.861 22:03:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:28.861 22:03:53 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.861 22:03:53 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.861 22:03:53 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.861 22:03:53 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.861 22:03:53 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.861 22:03:53 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:28.861 22:03:53 accel -- accel/accel.sh@41 -- # jq -r . 00:06:28.861 [2024-07-15 22:03:54.045933] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:06:28.861 [2024-07-15 22:03:54.046004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2569896 ] 00:06:28.861 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.861 [2024-07-15 22:03:54.111948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.121 [2024-07-15 22:03:54.187311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.692 22:03:54 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.692 22:03:54 accel -- common/autotest_common.sh@862 -- # return 0 00:06:29.692 22:03:54 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:29.692 22:03:54 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:29.692 22:03:54 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:29.692 22:03:54 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:29.692 22:03:54 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:29.692 22:03:54 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:29.692 22:03:54 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:29.692 22:03:54 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.692 22:03:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.692 22:03:54 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.692 22:03:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.692 22:03:54 accel -- accel/accel.sh@72 -- # IFS== 00:06:29.692 22:03:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:29.692 22:03:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:29.692 22:03:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.692 22:03:54 accel -- accel/accel.sh@72 -- # IFS== 00:06:29.692 22:03:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:29.692 22:03:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:29.692 22:03:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.692 22:03:54 accel -- accel/accel.sh@72 -- # IFS== 00:06:29.692 22:03:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:29.692 22:03:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:29.692 22:03:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.692 22:03:54 accel -- accel/accel.sh@72 -- # IFS== 00:06:29.692 22:03:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:29.692 22:03:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:29.692 22:03:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.692 22:03:54 accel -- accel/accel.sh@72 -- # IFS== 00:06:29.692 22:03:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:29.692 22:03:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:29.692 22:03:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.692 22:03:54 accel -- accel/accel.sh@72 -- # IFS== 00:06:29.692 22:03:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:29.692 22:03:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:29.692 22:03:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.692 22:03:54 accel -- accel/accel.sh@72 -- # IFS== 00:06:29.692 22:03:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:29.692 22:03:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:29.692 22:03:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.692 22:03:54 accel -- accel/accel.sh@72 -- # IFS== 00:06:29.692 22:03:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:29.692 22:03:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:29.692 22:03:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.692 22:03:54 accel -- accel/accel.sh@72 -- # IFS== 00:06:29.692 22:03:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:29.692 22:03:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:29.692 22:03:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.692 22:03:54 accel -- accel/accel.sh@72 -- # IFS== 00:06:29.692 22:03:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:29.692 22:03:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:29.692 22:03:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.692 22:03:54 accel -- accel/accel.sh@72 -- # IFS== 00:06:29.692 22:03:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:29.692 
22:03:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:29.692 22:03:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.692 22:03:54 accel -- accel/accel.sh@72 -- # IFS== 00:06:29.692 22:03:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:29.692 22:03:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:29.692 22:03:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.692 22:03:54 accel -- accel/accel.sh@72 -- # IFS== 00:06:29.692 22:03:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:29.692 22:03:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:29.692 22:03:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.692 22:03:54 accel -- accel/accel.sh@72 -- # IFS== 00:06:29.692 22:03:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:29.692 22:03:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:29.692 22:03:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.692 22:03:54 accel -- accel/accel.sh@72 -- # IFS== 00:06:29.692 22:03:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:29.692 22:03:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:29.692 22:03:54 accel -- accel/accel.sh@75 -- # killprocess 2569896 00:06:29.692 22:03:54 accel -- common/autotest_common.sh@948 -- # '[' -z 2569896 ']' 00:06:29.692 22:03:54 accel -- common/autotest_common.sh@952 -- # kill -0 2569896 00:06:29.692 22:03:54 accel -- common/autotest_common.sh@953 -- # uname 00:06:29.692 22:03:54 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:29.692 22:03:54 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2569896 00:06:29.692 22:03:54 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:29.692 22:03:54 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:29.692 22:03:54 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2569896' 00:06:29.692 killing process with pid 2569896 00:06:29.692 22:03:54 accel -- common/autotest_common.sh@967 -- # kill 2569896 00:06:29.692 22:03:54 accel -- common/autotest_common.sh@972 -- # wait 2569896 00:06:29.953 22:03:55 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:29.953 22:03:55 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:29.953 22:03:55 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:29.953 22:03:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.953 22:03:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.953 22:03:55 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:29.953 22:03:55 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:29.953 22:03:55 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:29.953 22:03:55 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.953 22:03:55 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.953 22:03:55 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.953 22:03:55 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.953 22:03:55 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.953 22:03:55 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:29.953 22:03:55 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:29.953 22:03:55 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.953 22:03:55 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:29.953 22:03:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:29.953 22:03:55 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:29.953 22:03:55 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:29.953 22:03:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.953 22:03:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.267 ************************************ 00:06:30.267 START TEST accel_missing_filename 00:06:30.267 ************************************ 00:06:30.267 22:03:55 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:30.267 22:03:55 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:30.267 22:03:55 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:30.267 22:03:55 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:30.267 22:03:55 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.267 22:03:55 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:30.267 22:03:55 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.267 22:03:55 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:30.267 22:03:55 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:30.267 22:03:55 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:30.267 22:03:55 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.267 22:03:55 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.267 22:03:55 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.267 22:03:55 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.267 22:03:55 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.267 22:03:55 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:30.267 22:03:55 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:30.267 [2024-07-15 22:03:55.307365] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:06:30.267 [2024-07-15 22:03:55.307416] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2570080 ] 00:06:30.267 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.267 [2024-07-15 22:03:55.367174] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.267 [2024-07-15 22:03:55.433229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.267 [2024-07-15 22:03:55.465226] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:30.267 [2024-07-15 22:03:55.502183] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:30.538 A filename is required. 
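The accel_missing_filename case above runs the compress workload without an input file and expects exactly this failure; per the usage text later in this log, compress/decompress take their input via -l. A hedged sketch of the failing and working forms, reusing the bib sample file that the following compress_verify case passes:

```bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
PERF=$SPDK/build/examples/accel_perf

$PERF -t 1 -w compress                          # fails: "A filename is required."
$PERF -t 1 -w compress -l $SPDK/test/accel/bib  # should run; adding -y aborts, as the next case shows
```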
00:06:30.538 22:03:55 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:30.538 22:03:55 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:30.538 22:03:55 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:30.538 22:03:55 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:30.538 22:03:55 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:30.538 22:03:55 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:30.538 00:06:30.538 real 0m0.265s 00:06:30.538 user 0m0.208s 00:06:30.538 sys 0m0.100s 00:06:30.538 22:03:55 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.538 22:03:55 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:30.538 ************************************ 00:06:30.538 END TEST accel_missing_filename 00:06:30.538 ************************************ 00:06:30.538 22:03:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:30.538 22:03:55 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:30.538 22:03:55 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:30.538 22:03:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.538 22:03:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.538 ************************************ 00:06:30.538 START TEST accel_compress_verify 00:06:30.538 ************************************ 00:06:30.538 22:03:55 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:30.538 22:03:55 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:30.538 22:03:55 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:30.538 22:03:55 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:30.538 22:03:55 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.538 22:03:55 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:30.538 22:03:55 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.538 22:03:55 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:30.538 22:03:55 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:30.538 22:03:55 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.538 22:03:55 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.538 22:03:55 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.538 22:03:55 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:30.538 22:03:55 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.538 22:03:55 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.538 22:03:55 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:30.538 22:03:55 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:30.538 [2024-07-15 22:03:55.658050] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:06:30.538 [2024-07-15 22:03:55.658113] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2570254 ] 00:06:30.538 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.538 [2024-07-15 22:03:55.719043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.538 [2024-07-15 22:03:55.784303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.538 [2024-07-15 22:03:55.816052] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:30.538 [2024-07-15 22:03:55.852995] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:30.799 00:06:30.799 Compression does not support the verify option, aborting. 00:06:30.799 22:03:55 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:30.799 22:03:55 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:30.799 22:03:55 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:30.799 22:03:55 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:30.799 22:03:55 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:30.799 22:03:55 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:30.799 00:06:30.799 real 0m0.278s 00:06:30.799 user 0m0.193s 00:06:30.799 sys 0m0.100s 00:06:30.799 22:03:55 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.799 22:03:55 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:30.799 ************************************ 00:06:30.799 END TEST accel_compress_verify 00:06:30.799 ************************************ 00:06:30.799 22:03:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:30.799 22:03:55 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:30.799 22:03:55 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:30.799 22:03:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.799 22:03:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.799 ************************************ 00:06:30.799 START TEST accel_wrong_workload 00:06:30.799 ************************************ 00:06:30.799 22:03:55 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:30.799 22:03:55 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:30.799 22:03:55 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:30.799 22:03:55 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:30.799 22:03:55 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.799 22:03:55 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:30.799 22:03:55 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.800 22:03:55 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:30.800 22:03:55 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:30.800 22:03:55 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:30.800 22:03:55 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.800 22:03:55 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.800 22:03:55 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.800 22:03:55 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.800 22:03:55 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.800 22:03:55 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:30.800 22:03:55 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:30.800 Unsupported workload type: foobar 00:06:30.800 [2024-07-15 22:03:56.012847] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:30.800 accel_perf options: 00:06:30.800 [-h help message] 00:06:30.800 [-q queue depth per core] 00:06:30.800 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:30.800 [-T number of threads per core 00:06:30.800 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:30.800 [-t time in seconds] 00:06:30.800 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:30.800 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:30.800 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:30.800 [-l for compress/decompress workloads, name of uncompressed input file 00:06:30.800 [-S for crc32c workload, use this seed value (default 0) 00:06:30.800 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:30.800 [-f for fill workload, use this BYTE value (default 255) 00:06:30.800 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:30.800 [-y verify result if this switch is on] 00:06:30.800 [-a tasks to allocate per core (default: same value as -q)] 00:06:30.800 Can be used to spread operations across a wider range of memory. 
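The usage text above is printed because foobar is not a valid -w workload; it lists the accepted workloads and options. For reference, a valid invocation along the lines of the crc32c case that follows, with the options it exercises spelled out:

```bash
# Sketch of a valid accel_perf run, mirroring the crc32c case below:
#   -t 1   run for 1 second      -w crc32c  workload from the list above
#   -S 32  crc32c seed value     -y         verify each result
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
    -t 1 -w crc32c -S 32 -y
```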
00:06:30.800 22:03:56 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:30.800 22:03:56 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:30.800 22:03:56 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:30.800 22:03:56 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:30.800 00:06:30.800 real 0m0.036s 00:06:30.800 user 0m0.016s 00:06:30.800 sys 0m0.020s 00:06:30.800 22:03:56 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.800 22:03:56 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:30.800 ************************************ 00:06:30.800 END TEST accel_wrong_workload 00:06:30.800 ************************************ 00:06:30.800 Error: writing output failed: Broken pipe 00:06:30.800 22:03:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:30.800 22:03:56 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:30.800 22:03:56 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:30.800 22:03:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.800 22:03:56 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.800 ************************************ 00:06:30.800 START TEST accel_negative_buffers 00:06:30.800 ************************************ 00:06:30.800 22:03:56 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:30.800 22:03:56 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:30.800 22:03:56 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:30.800 22:03:56 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:30.800 22:03:56 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.800 22:03:56 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:30.800 22:03:56 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.800 22:03:56 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:30.800 22:03:56 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:30.800 22:03:56 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:30.800 22:03:56 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.800 22:03:56 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.800 22:03:56 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.800 22:03:56 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.800 22:03:56 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.800 22:03:56 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:30.800 22:03:56 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:30.800 -x option must be non-negative. 
00:06:31.061 [2024-07-15 22:03:56.123087] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:31.061 accel_perf options: 00:06:31.061 [-h help message] 00:06:31.061 [-q queue depth per core] 00:06:31.061 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:31.061 [-T number of threads per core 00:06:31.061 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:31.061 [-t time in seconds] 00:06:31.061 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:31.061 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:31.061 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:31.061 [-l for compress/decompress workloads, name of uncompressed input file 00:06:31.061 [-S for crc32c workload, use this seed value (default 0) 00:06:31.061 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:31.061 [-f for fill workload, use this BYTE value (default 255) 00:06:31.061 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:31.061 [-y verify result if this switch is on] 00:06:31.061 [-a tasks to allocate per core (default: same value as -q)] 00:06:31.061 Can be used to spread operations across a wider range of memory. 00:06:31.061 22:03:56 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:31.061 22:03:56 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:31.061 22:03:56 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:31.061 22:03:56 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:31.061 00:06:31.061 real 0m0.036s 00:06:31.061 user 0m0.024s 00:06:31.061 sys 0m0.012s 00:06:31.061 22:03:56 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.061 22:03:56 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:31.061 ************************************ 00:06:31.061 END TEST accel_negative_buffers 00:06:31.061 ************************************ 00:06:31.061 Error: writing output failed: Broken pipe 00:06:31.061 22:03:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:31.061 22:03:56 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:31.061 22:03:56 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:31.061 22:03:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.061 22:03:56 accel -- common/autotest_common.sh@10 -- # set +x 00:06:31.061 ************************************ 00:06:31.061 START TEST accel_crc32c 00:06:31.061 ************************************ 00:06:31.061 22:03:56 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:31.061 22:03:56 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:31.061 22:03:56 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:31.061 22:03:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.061 22:03:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.061 22:03:56 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:31.061 22:03:56 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:31.061 22:03:56 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:31.061 22:03:56 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.061 22:03:56 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.061 22:03:56 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.061 22:03:56 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.061 22:03:56 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.061 22:03:56 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:31.061 22:03:56 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:31.061 [2024-07-15 22:03:56.232727] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:06:31.061 [2024-07-15 22:03:56.232793] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2570466 ] 00:06:31.061 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.061 [2024-07-15 22:03:56.296047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.061 [2024-07-15 22:03:56.368120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.322 22:03:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.288 22:03:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.288 22:03:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:06:32.288 22:03:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.288 22:03:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.288 22:03:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.288 22:03:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.288 22:03:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.288 22:03:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.288 22:03:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.288 22:03:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.288 22:03:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.288 22:03:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.288 22:03:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.288 22:03:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.288 22:03:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.288 22:03:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.288 22:03:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.288 22:03:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.288 22:03:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.288 22:03:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.288 22:03:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.288 22:03:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.288 22:03:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.288 22:03:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.288 22:03:57 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:32.288 22:03:57 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:32.288 22:03:57 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.288 00:06:32.288 real 0m1.292s 00:06:32.288 user 0m1.206s 00:06:32.288 sys 0m0.099s 00:06:32.288 22:03:57 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.288 22:03:57 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:32.288 ************************************ 00:06:32.288 END TEST accel_crc32c 00:06:32.288 ************************************ 00:06:32.288 22:03:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:32.288 22:03:57 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:32.288 22:03:57 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:32.288 22:03:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.288 22:03:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.288 ************************************ 00:06:32.288 START TEST accel_crc32c_C2 00:06:32.288 ************************************ 00:06:32.288 22:03:57 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:32.288 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:32.288 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:32.288 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.288 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.288 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:32.288 22:03:57 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:32.288 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.288 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.288 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.288 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.288 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.288 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.288 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:32.288 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:32.288 [2024-07-15 22:03:57.602050] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:06:32.288 [2024-07-15 22:03:57.602118] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2570788 ] 00:06:32.550 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.550 [2024-07-15 22:03:57.665897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.550 [2024-07-15 22:03:57.735451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.550 22:03:57 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:32.550 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.551 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.551 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.551 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:32.551 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.551 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.551 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.551 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:32.551 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.551 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.551 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.551 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.551 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.551 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.551 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.551 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.551 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.551 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:06:32.551 22:03:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.932 22:03:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.932 22:03:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.932 22:03:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.932 22:03:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.932 22:03:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.932 22:03:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.932 22:03:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.932 22:03:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.932 22:03:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.932 22:03:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.932 22:03:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.932 22:03:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.932 22:03:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.932 22:03:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.932 22:03:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.932 22:03:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.932 22:03:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.932 22:03:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.932 22:03:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.932 22:03:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.932 22:03:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.932 22:03:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.932 22:03:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.932 22:03:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.932 22:03:58 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.932 22:03:58 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:33.932 22:03:58 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.932 00:06:33.932 real 0m1.291s 00:06:33.932 user 0m1.202s 00:06:33.932 sys 0m0.101s 00:06:33.932 22:03:58 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.932 22:03:58 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:33.932 ************************************ 00:06:33.932 END TEST accel_crc32c_C2 00:06:33.932 ************************************ 00:06:33.932 22:03:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:33.932 22:03:58 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:33.932 22:03:58 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:33.932 22:03:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.932 22:03:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.932 ************************************ 00:06:33.932 START TEST accel_copy 00:06:33.932 ************************************ 00:06:33.932 22:03:58 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:33.932 22:03:58 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:33.933 22:03:58 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
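The real/user/sys triple and the END TEST banner just above come from the harness's run_test wrapper (visible in the trace markers as common/autotest_common.sh), which, judging by the real/user/sys output format, times each test body with bash's time keyword; the roughly 1.29 s of wall time is consistent with the one-second -t 1 run plus start-up and teardown. A minimal sketch of that timing step, assuming the accel_test helper from accel/accel.sh is already sourced, looks like this:

  # Minimal sketch of the per-test timing seen in this log; the real run_test in
  # common/autotest_common.sh also prints the START/END banners and manages xtrace.
  time accel_test -t 1 -w crc32c -y -C 2   # same arguments as the logged accel_crc32c_C2 case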
00:06:33.933 22:03:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.933 22:03:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.933 22:03:58 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:33.933 22:03:58 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:33.933 22:03:58 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:33.933 22:03:58 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.933 22:03:58 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.933 22:03:58 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.933 22:03:58 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.933 22:03:58 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.933 22:03:58 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:33.933 22:03:58 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:33.933 [2024-07-15 22:03:58.969851] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:06:33.933 [2024-07-15 22:03:58.969918] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2570977 ] 00:06:33.933 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.933 [2024-07-15 22:03:59.033008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.933 [2024-07-15 22:03:59.104829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.933 22:03:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.315 22:04:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:35.315 22:04:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.315 22:04:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.315 22:04:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.315 
22:04:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:35.315 22:04:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.315 22:04:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.315 22:04:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.315 22:04:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:35.315 22:04:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.315 22:04:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.315 22:04:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.315 22:04:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:35.315 22:04:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.315 22:04:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.315 22:04:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.315 22:04:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:35.315 22:04:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.315 22:04:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.315 22:04:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.315 22:04:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:35.315 22:04:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.315 22:04:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.315 22:04:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.315 22:04:00 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:35.315 22:04:00 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:35.315 22:04:00 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.315 00:06:35.315 real 0m1.293s 00:06:35.315 user 0m1.201s 00:06:35.315 sys 0m0.102s 00:06:35.315 22:04:00 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.315 22:04:00 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:35.315 ************************************ 00:06:35.315 END TEST accel_copy 00:06:35.315 ************************************ 00:06:35.315 22:04:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:35.315 22:04:00 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:35.315 22:04:00 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:35.315 22:04:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.315 22:04:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:35.315 ************************************ 00:06:35.315 START TEST accel_fill 00:06:35.315 ************************************ 00:06:35.315 22:04:00 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:35.315 [2024-07-15 22:04:00.335917] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:06:35.315 [2024-07-15 22:04:00.335983] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2571207 ] 00:06:35.315 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.315 [2024-07-15 22:04:00.396625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.315 [2024-07-15 22:04:00.461678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
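The accel_fill command line recorded above passes its JSON configuration over a process-substitution descriptor, which is why -c /dev/fd/62 appears together with build_accel_config, accel_json_cfg=() and jq -r . in the trace. For rerunning just this workload by hand, a hedged sketch follows: the flags after the binary are copied verbatim from the logged command, and dropping the -c argument is an assumption that relies on the built-in software accel module being the default, the same software path the accel_module=software lines report.

  # Hedged standalone invocation of the fill workload from this test run.
  # Only the omission of "-c /dev/fd/62" (the harness-generated config) is assumed here;
  # every other flag is taken verbatim from the command captured in the log.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w fill -f 128 -q 64 -a 64 -y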
00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.315 22:04:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:36.698 22:04:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:36.698 22:04:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:36.698 22:04:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:36.698 22:04:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:36.698 22:04:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:36.698 22:04:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:36.698 22:04:01 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:06:36.698 22:04:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:36.698 22:04:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:36.698 22:04:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:36.698 22:04:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:36.698 22:04:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:36.698 22:04:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:36.698 22:04:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:36.698 22:04:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:36.698 22:04:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:36.698 22:04:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:36.698 22:04:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:36.698 22:04:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:36.698 22:04:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:36.698 22:04:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:36.698 22:04:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:36.698 22:04:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:36.698 22:04:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:36.698 22:04:01 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:36.698 22:04:01 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:36.698 22:04:01 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.698 00:06:36.698 real 0m1.283s 00:06:36.698 user 0m1.186s 00:06:36.698 sys 0m0.109s 00:06:36.698 22:04:01 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.698 22:04:01 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:36.698 ************************************ 00:06:36.698 END TEST accel_fill 00:06:36.698 ************************************ 00:06:36.698 22:04:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:36.698 22:04:01 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:36.698 22:04:01 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:36.698 22:04:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.698 22:04:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.698 ************************************ 00:06:36.698 START TEST accel_copy_crc32c 00:06:36.698 ************************************ 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:36.698 [2024-07-15 22:04:01.692607] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:06:36.698 [2024-07-15 22:04:01.692668] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2571556 ] 00:06:36.698 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.698 [2024-07-15 22:04:01.753461] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.698 [2024-07-15 22:04:01.819242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.698 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.699 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:36.699 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.699 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.699 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.699 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:36.699 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.699 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.699 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.699 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:36.699 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.699 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.699 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.699 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.699 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.699 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.699 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.699 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:36.699 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.699 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.699 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.699 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.699 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.699 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.699 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.699 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.699 
22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.699 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.699 22:04:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.640 22:04:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.640 22:04:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.640 22:04:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.640 22:04:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.640 22:04:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.640 22:04:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.640 22:04:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.640 22:04:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.640 22:04:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.640 22:04:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.640 22:04:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.640 22:04:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.640 22:04:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.640 22:04:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.640 22:04:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.640 22:04:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.640 22:04:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.640 22:04:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.640 22:04:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.640 22:04:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.640 22:04:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.640 22:04:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.640 22:04:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.640 22:04:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.640 22:04:02 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.640 22:04:02 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:37.640 22:04:02 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.640 00:06:37.640 real 0m1.284s 00:06:37.640 user 0m1.189s 00:06:37.640 sys 0m0.107s 00:06:37.640 22:04:02 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.640 22:04:02 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:37.640 ************************************ 00:06:37.640 END TEST accel_copy_crc32c 00:06:37.640 ************************************ 00:06:37.901 22:04:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:37.901 22:04:02 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:37.901 22:04:02 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:37.901 22:04:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.901 22:04:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.901 ************************************ 00:06:37.901 START TEST accel_copy_crc32c_C2 00:06:37.901 ************************************ 00:06:37.901 22:04:03 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:37.901 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:37.901 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:37.901 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.901 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.901 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:37.901 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:37.901 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.901 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.901 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.901 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.901 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.901 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.901 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:37.901 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:37.901 [2024-07-15 22:04:03.051385] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:06:37.901 [2024-07-15 22:04:03.051484] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2571906 ] 00:06:37.901 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.901 [2024-07-15 22:04:03.112602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.901 [2024-07-15 22:04:03.178131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.901 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.901 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.901 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.901 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.901 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.901 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.901 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.901 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.901 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:37.901 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.902 22:04:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.284 22:04:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.284 22:04:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.284 22:04:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.284 22:04:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.284 22:04:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.284 22:04:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.284 22:04:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.284 22:04:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.284 22:04:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.284 22:04:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.284 22:04:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.284 22:04:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.284 22:04:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.284 22:04:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.284 22:04:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.284 22:04:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.284 22:04:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.284 22:04:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.284 22:04:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.284 22:04:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.284 22:04:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.284 22:04:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.284 22:04:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.284 22:04:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:06:39.284 22:04:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:39.284 22:04:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:39.284 22:04:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.284 00:06:39.284 real 0m1.285s 00:06:39.284 user 0m1.201s 00:06:39.284 sys 0m0.096s 00:06:39.284 22:04:04 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.284 22:04:04 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:39.284 ************************************ 00:06:39.284 END TEST accel_copy_crc32c_C2 00:06:39.284 ************************************ 00:06:39.284 22:04:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:39.284 22:04:04 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:39.284 22:04:04 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:39.284 22:04:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.284 22:04:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.284 ************************************ 00:06:39.284 START TEST accel_dualcast 00:06:39.284 ************************************ 00:06:39.284 22:04:04 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:39.284 22:04:04 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:39.284 22:04:04 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:39.284 22:04:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.284 22:04:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.284 22:04:04 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:39.284 22:04:04 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:39.284 22:04:04 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:39.284 22:04:04 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.284 22:04:04 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.284 22:04:04 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.284 22:04:04 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.284 22:04:04 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.284 22:04:04 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:39.284 22:04:04 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:39.284 [2024-07-15 22:04:04.409252] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
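The EAL notice about no free 2048 kB hugepages on node 1 has repeated at every accel_perf start-up in this section and appears again for the dualcast run that begins here; the runs nevertheless complete, so 2 MB hugepages are evidently available on another NUMA node of the test host. A generic way to confirm that from the shell, using the standard sysfs layout rather than anything the harness itself runs, is:

  # Hedged sketch: list per-NUMA-node 2048 kB hugepage totals and free counts.
  for n in /sys/devices/system/node/node*/hugepages/hugepages-2048kB; do
      echo "${n}: total=$(cat "${n}/nr_hugepages") free=$(cat "${n}/free_hugepages")"
  done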
00:06:39.284 [2024-07-15 22:04:04.409334] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2572259 ] 00:06:39.284 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.284 [2024-07-15 22:04:04.471396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.284 [2024-07-15 22:04:04.541746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.284 22:04:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:39.284 22:04:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.284 22:04:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.284 22:04:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.284 22:04:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:39.284 22:04:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.285 22:04:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:40.668 22:04:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:40.668 22:04:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:40.668 22:04:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:40.668 22:04:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:40.668 22:04:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:40.668 22:04:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:40.668 22:04:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:40.668 22:04:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:40.668 22:04:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:40.668 22:04:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:40.668 22:04:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:40.668 22:04:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:40.668 22:04:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:40.668 22:04:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:40.668 22:04:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:40.668 22:04:05 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:40.668 22:04:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:40.668 22:04:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:40.668 22:04:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:40.668 22:04:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:40.668 22:04:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:40.668 22:04:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:40.668 22:04:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:40.668 22:04:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:40.668 22:04:05 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:40.668 22:04:05 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:40.668 22:04:05 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.668 00:06:40.668 real 0m1.290s 00:06:40.668 user 0m1.200s 00:06:40.668 sys 0m0.102s 00:06:40.668 22:04:05 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.668 22:04:05 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:40.668 ************************************ 00:06:40.668 END TEST accel_dualcast 00:06:40.668 ************************************ 00:06:40.668 22:04:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:40.668 22:04:05 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:40.668 22:04:05 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:40.668 22:04:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.668 22:04:05 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.668 ************************************ 00:06:40.668 START TEST accel_compare 00:06:40.668 ************************************ 00:06:40.668 22:04:05 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:40.668 22:04:05 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:40.668 22:04:05 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:40.668 22:04:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:40.669 [2024-07-15 22:04:05.758571] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
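
The long runs of "val=..." lines in each test are bash xtrace of the harness reading accel_perf's echoed configuration as colon-separated key/value pairs ("IFS=: read -r var val" followed by a 'case "$var"' dispatch), which is how accel_module ends up as "software" and accel_opc as the workload name checked at the end of every test. A standalone sketch of that parsing pattern, using made-up "Module"/"Workload" keys as stand-ins for whatever accel_perf actually prints:

    # Illustrative parse loop in the same shape as the xtrace above; the input lines are stand-ins.
    accel_module="" accel_opc=""
    while IFS=: read -r var val; do
      case "$var" in
        *Module*)   accel_module=${val//[[:space:]]/} ;;  # strip whitespace around the value
        *Workload*) accel_opc=${val//[[:space:]]/} ;;
      esac
    done < <(printf '%s\n' 'Module: software' 'Workload: compare')
    [[ -n $accel_module ]] && [[ -n $accel_opc ]] && echo "ran $accel_opc on the $accel_module module"
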
00:06:40.669 [2024-07-15 22:04:05.758631] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2572421 ] 00:06:40.669 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.669 [2024-07-15 22:04:05.819641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.669 [2024-07-15 22:04:05.886263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:40.669 22:04:05 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:40.669 22:04:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:42.052 22:04:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:42.052 22:04:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:42.052 22:04:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:42.052 22:04:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:42.052 22:04:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:42.052 22:04:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:42.052 22:04:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:42.052 22:04:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:42.052 22:04:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:42.052 22:04:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:42.052 22:04:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:42.052 22:04:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:42.052 22:04:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:42.052 22:04:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:42.052 22:04:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:42.052 22:04:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:42.052 
22:04:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:42.052 22:04:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:42.052 22:04:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:42.052 22:04:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:42.052 22:04:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:42.052 22:04:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:42.052 22:04:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:42.052 22:04:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:42.052 22:04:07 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:42.052 22:04:07 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:42.052 22:04:07 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.052 00:06:42.052 real 0m1.286s 00:06:42.052 user 0m1.200s 00:06:42.052 sys 0m0.098s 00:06:42.052 22:04:07 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.052 22:04:07 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:42.052 ************************************ 00:06:42.052 END TEST accel_compare 00:06:42.052 ************************************ 00:06:42.052 22:04:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:42.052 22:04:07 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:42.052 22:04:07 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:42.052 22:04:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.053 22:04:07 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.053 ************************************ 00:06:42.053 START TEST accel_xor 00:06:42.053 ************************************ 00:06:42.053 22:04:07 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:42.053 [2024-07-15 22:04:07.113331] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
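
Each "START TEST ... / END TEST ..." banner pair above comes from the harness's run_test helper, which prints the banner, runs the named command under "time" (producing the real/user/sys lines), and returns its status. A minimal stand-in with the same shape, not the literal run_test from autotest_common.sh:

    # Banner, timed command, banner; the wrapped command's exit status is preserved.
    run_test_sketch() {
      local name=$1 rc
      shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"
      rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return "$rc"
    }
    # Example: run_test_sketch accel_xor ./accel_test -t 1 -w xor -y   (accel_test path is a placeholder)
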
00:06:42.053 [2024-07-15 22:04:07.113402] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2572644 ] 00:06:42.053 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.053 [2024-07-15 22:04:07.175578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.053 [2024-07-15 22:04:07.243675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.053 22:04:07 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.053 22:04:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.436 22:04:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.436 22:04:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.436 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.436 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.436 22:04:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.436 22:04:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.436 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.436 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.436 22:04:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.436 22:04:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.436 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.436 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.436 22:04:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.436 22:04:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.436 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.436 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.436 22:04:08 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:43.436 22:04:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.436 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.436 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.436 22:04:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.436 22:04:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.437 00:06:43.437 real 0m1.287s 00:06:43.437 user 0m1.182s 00:06:43.437 sys 0m0.116s 00:06:43.437 22:04:08 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.437 22:04:08 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:43.437 ************************************ 00:06:43.437 END TEST accel_xor 00:06:43.437 ************************************ 00:06:43.437 22:04:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:43.437 22:04:08 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:43.437 22:04:08 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:43.437 22:04:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.437 22:04:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.437 ************************************ 00:06:43.437 START TEST accel_xor 00:06:43.437 ************************************ 00:06:43.437 22:04:08 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:43.437 [2024-07-15 22:04:08.471919] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
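
The second accel_xor test starting above differs from the first only in the extra "-x 3" argument, and its echoed configuration below shows "val=3" where the first run showed "val=2" right after "val=xor", so -x appears to select the number of XOR source buffers (two by default). Side by side, with the path and flags taken from the logged command lines:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ACCEL_PERF="$SPDK_DIR/build/examples/accel_perf"
    "$ACCEL_PERF" -t 1 -w xor -y          # first run: two source buffers (val=2 in the xtrace)
    "$ACCEL_PERF" -t 1 -w xor -y -x 3     # second run: three source buffers (val=3 in the xtrace)
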
00:06:43.437 [2024-07-15 22:04:08.471982] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2572998 ] 00:06:43.437 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.437 [2024-07-15 22:04:08.532570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.437 [2024-07-15 22:04:08.598231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.437 22:04:08 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.437 22:04:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.821 22:04:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:44.821 22:04:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.821 22:04:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.821 22:04:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.821 22:04:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:44.821 22:04:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.821 22:04:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.821 22:04:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.821 22:04:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:44.821 22:04:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.821 22:04:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.821 22:04:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.821 22:04:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:44.821 22:04:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.821 22:04:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.821 22:04:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.821 22:04:09 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:44.821 22:04:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.821 22:04:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.821 22:04:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.821 22:04:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:44.821 22:04:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.821 22:04:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.821 22:04:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.821 22:04:09 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:44.821 22:04:09 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:44.821 22:04:09 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.821 00:06:44.821 real 0m1.283s 00:06:44.821 user 0m1.195s 00:06:44.821 sys 0m0.100s 00:06:44.821 22:04:09 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.821 22:04:09 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:44.821 ************************************ 00:06:44.821 END TEST accel_xor 00:06:44.822 ************************************ 00:06:44.822 22:04:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:44.822 22:04:09 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:44.822 22:04:09 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:44.822 22:04:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.822 22:04:09 accel -- common/autotest_common.sh@10 -- # set +x 00:06:44.822 ************************************ 00:06:44.822 START TEST accel_dif_verify 00:06:44.822 ************************************ 00:06:44.822 22:04:09 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:44.822 22:04:09 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:44.822 22:04:09 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:44.822 22:04:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.822 22:04:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.822 22:04:09 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:44.822 22:04:09 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:44.822 22:04:09 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:44.822 22:04:09 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:44.822 22:04:09 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:44.822 22:04:09 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.822 22:04:09 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.822 22:04:09 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:44.822 22:04:09 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:44.822 22:04:09 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:44.822 [2024-07-15 22:04:09.829609] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
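
The accel_dif_verify test starting above is the first of the DIF workloads. Unlike the copy/compare/xor cases, its echoed configuration below carries two extra sizes, '512 bytes' and '8 bytes', alongside the 4096-byte buffers, consistent with the usual DIF layout of 512-byte blocks each followed by 8 bytes of protection information; it also runs with verify set to No, since no -y is passed on its run_test line. A sketch of driving the two DIF workloads with the flags exactly as logged:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ACCEL_PERF="$SPDK_DIR/build/examples/accel_perf"
    # Flags copied from the run_test lines; neither DIF run passes -y.
    "$ACCEL_PERF" -t 1 -w dif_verify
    "$ACCEL_PERF" -t 1 -w dif_generate
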
00:06:44.822 [2024-07-15 22:04:09.829728] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2573347 ] 00:06:44.822 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.822 [2024-07-15 22:04:09.896726] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.822 [2024-07-15 22:04:09.964780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.822 22:04:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:44.822 22:04:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.822 22:04:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.822 22:04:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.822 22:04:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:44.822 22:04:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.822 22:04:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.822 22:04:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.822 22:04:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:44.822 22:04:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.822 22:04:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.822 22:04:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.822 22:04:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:44.822 22:04:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.822 22:04:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.822 22:04:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:46.204 22:04:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:06:46.204 22:04:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:46.204 22:04:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:46.204 22:04:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:46.204 22:04:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:46.204 22:04:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:46.204 22:04:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:46.204 22:04:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:46.204 22:04:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:46.204 22:04:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:46.204 22:04:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:46.204 22:04:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:46.204 22:04:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:46.204 22:04:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:46.204 22:04:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:46.204 22:04:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:46.204 22:04:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:46.204 22:04:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:46.204 22:04:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:46.204 22:04:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:46.204 22:04:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:46.204 22:04:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:46.204 22:04:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:46.204 22:04:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:46.204 22:04:11 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:46.204 22:04:11 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:46.204 22:04:11 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.204 00:06:46.204 real 0m1.297s 00:06:46.204 user 0m1.205s 00:06:46.204 sys 0m0.106s 00:06:46.204 22:04:11 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.204 22:04:11 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:46.204 ************************************ 00:06:46.204 END TEST accel_dif_verify 00:06:46.204 ************************************ 00:06:46.204 22:04:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:46.204 22:04:11 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:46.204 22:04:11 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:46.204 22:04:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.204 22:04:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.204 ************************************ 00:06:46.204 START TEST accel_dif_generate 00:06:46.204 ************************************ 00:06:46.204 22:04:11 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:46.204 22:04:11 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:46.204 22:04:11 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:46.204 22:04:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.204 
22:04:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.204 22:04:11 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:46.204 22:04:11 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:46.204 22:04:11 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:46.204 22:04:11 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.204 22:04:11 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.204 22:04:11 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.204 22:04:11 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.204 22:04:11 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.204 22:04:11 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:46.204 22:04:11 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:46.204 [2024-07-15 22:04:11.198515] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:06:46.204 [2024-07-15 22:04:11.198582] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2573694 ] 00:06:46.204 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.204 [2024-07-15 22:04:11.269752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.204 [2024-07-15 22:04:11.340133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.204 22:04:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:46.204 22:04:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.204 22:04:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.204 22:04:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.204 22:04:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:46.204 22:04:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.204 22:04:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.204 22:04:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.204 22:04:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:46.204 22:04:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.204 22:04:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.204 22:04:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.204 22:04:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:46.204 22:04:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.204 22:04:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.204 22:04:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.204 22:04:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:46.204 22:04:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:46.205 22:04:11 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.205 22:04:11 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.205 22:04:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:47.149 22:04:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:47.149 22:04:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:47.149 22:04:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:47.149 22:04:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:47.149 22:04:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:47.149 22:04:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:47.149 22:04:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:47.149 22:04:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:47.149 22:04:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:47.149 22:04:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:47.149 22:04:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:47.149 22:04:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:47.149 22:04:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:47.149 22:04:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:47.149 22:04:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:47.149 22:04:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:47.149 22:04:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:47.149 22:04:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:47.149 22:04:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:47.149 22:04:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:47.149 22:04:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:47.149 22:04:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:47.149 22:04:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:47.149 22:04:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:47.149 22:04:12 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:47.149 22:04:12 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:47.149 22:04:12 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.149 00:06:47.149 real 0m1.301s 00:06:47.149 user 0m1.198s 00:06:47.149 sys 0m0.115s 00:06:47.149 22:04:12 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.149 22:04:12 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:47.149 ************************************ 00:06:47.149 END TEST accel_dif_generate 00:06:47.149 ************************************ 00:06:47.410 22:04:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:47.410 22:04:12 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:47.410 22:04:12 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:47.410 22:04:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.410 22:04:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.410 ************************************ 00:06:47.410 START TEST accel_dif_generate_copy 00:06:47.410 ************************************ 00:06:47.410 22:04:12 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:47.410 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:47.410 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:47.410 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.410 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.410 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:47.410 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:47.410 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:47.410 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.410 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.410 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.410 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.410 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.410 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:47.410 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:47.410 [2024-07-15 22:04:12.575593] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
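The block above completes the dif_generate pass and immediately sets up dif_generate_copy; both are one-second software runs of the accel_perf example, and the exact command lines appear in the trace (the -c /dev/fd/62 argument feeds the accel module JSON built by build_accel_config, which is empty here, so it is dropped in this sketch). A minimal way to repeat the two runs by hand, assuming the same built tree as the workspace path shown in the log:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # workspace path taken from the trace
  # 1-second software DIF-generate run, as recorded above
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w dif_generate
  # same workload, but generating DIF while copying the buffer
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w dif_generate_copy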
00:06:47.410 [2024-07-15 22:04:12.575659] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2573909 ] 00:06:47.410 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.410 [2024-07-15 22:04:12.635978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.410 [2024-07-15 22:04:12.702719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.701 22:04:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.640 22:04:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:48.640 22:04:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.640 22:04:13 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:48.640 22:04:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.640 22:04:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:48.640 22:04:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.640 22:04:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.640 22:04:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.640 22:04:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:48.640 22:04:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.640 22:04:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.640 22:04:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.640 22:04:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:48.640 22:04:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.640 22:04:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.640 22:04:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.640 22:04:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:48.640 22:04:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.640 22:04:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.640 22:04:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.640 22:04:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:48.640 22:04:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.640 22:04:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.640 22:04:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.640 22:04:13 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:48.640 22:04:13 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:48.640 22:04:13 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.640 00:06:48.640 real 0m1.284s 00:06:48.640 user 0m1.198s 00:06:48.640 sys 0m0.099s 00:06:48.640 22:04:13 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.640 22:04:13 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:48.640 ************************************ 00:06:48.640 END TEST accel_dif_generate_copy 00:06:48.640 ************************************ 00:06:48.640 22:04:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:48.640 22:04:13 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:48.640 22:04:13 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:48.640 22:04:13 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:48.640 22:04:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.640 22:04:13 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.640 ************************************ 00:06:48.640 START TEST accel_comp 00:06:48.640 ************************************ 00:06:48.640 22:04:13 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:48.640 22:04:13 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:06:48.640 22:04:13 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:48.640 22:04:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.640 22:04:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.640 22:04:13 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:48.640 22:04:13 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:48.640 22:04:13 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:48.640 22:04:13 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.640 22:04:13 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.640 22:04:13 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.640 22:04:13 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.640 22:04:13 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.640 22:04:13 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:48.640 22:04:13 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:48.640 [2024-07-15 22:04:13.938255] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:06:48.640 [2024-07-15 22:04:13.938340] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2574109 ] 00:06:48.899 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.900 [2024-07-15 22:04:14.000118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.900 [2024-07-15 22:04:14.066748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.900 22:04:14 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.900 22:04:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.283 22:04:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:50.283 22:04:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.283 22:04:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.283 22:04:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.283 22:04:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:50.283 22:04:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.283 22:04:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.283 22:04:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.283 22:04:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:50.283 22:04:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.283 22:04:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.283 22:04:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.283 22:04:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:50.283 22:04:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.283 22:04:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.283 22:04:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.283 22:04:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:50.283 22:04:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.283 22:04:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.283 22:04:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.283 22:04:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:50.283 22:04:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.283 22:04:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.283 22:04:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.283 22:04:15 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:50.283 22:04:15 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:50.283 22:04:15 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.283 00:06:50.283 real 0m1.289s 00:06:50.283 user 0m1.207s 00:06:50.283 sys 0m0.095s 00:06:50.283 22:04:15 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.283 22:04:15 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:50.283 ************************************ 00:06:50.283 END TEST accel_comp 00:06:50.283 ************************************ 00:06:50.283 22:04:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:50.283 22:04:15 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:50.283 22:04:15 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:50.283 22:04:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.283 22:04:15 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:50.283 ************************************ 00:06:50.283 START TEST accel_decomp 00:06:50.283 ************************************ 00:06:50.283 22:04:15 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:50.283 22:04:15 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:50.283 22:04:15 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:50.283 22:04:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.283 22:04:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.283 22:04:15 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:50.283 22:04:15 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:50.283 22:04:15 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:50.283 22:04:15 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.283 22:04:15 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.283 22:04:15 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.283 22:04:15 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.283 22:04:15 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.283 22:04:15 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:50.283 22:04:15 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:50.283 [2024-07-15 22:04:15.301256] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
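After the copy variant, the harness switches to the compression workloads: accel_comp compresses the bundled bib test file, and accel_decomp decompresses it with -y, which asks accel_perf to verify the output. A sketch of the two commands recorded in the trace, under the same workspace-path assumption as before:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BIB=$SPDK_DIR/test/accel/bib                     # input file used by both runs in the trace
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w compress   -l "$BIB"      # accel_comp
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress -l "$BIB" -y   # accel_decomp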
00:06:50.283 [2024-07-15 22:04:15.301354] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2574437 ] 00:06:50.283 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.283 [2024-07-15 22:04:15.362694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.283 [2024-07-15 22:04:15.427149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.283 22:04:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:50.283 22:04:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.283 22:04:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.283 22:04:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.283 22:04:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:50.283 22:04:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.283 22:04:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.284 22:04:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.666 22:04:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:51.666 22:04:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.666 22:04:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.666 22:04:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.666 22:04:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:51.666 22:04:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.666 22:04:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.666 22:04:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.666 22:04:16 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:51.666 22:04:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.666 22:04:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.666 22:04:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.666 22:04:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:51.666 22:04:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.666 22:04:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.666 22:04:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.666 22:04:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:51.666 22:04:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.666 22:04:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.666 22:04:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.666 22:04:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:51.666 22:04:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.666 22:04:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.666 22:04:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.666 22:04:16 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:51.666 22:04:16 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:51.666 22:04:16 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.666 00:06:51.666 real 0m1.286s 00:06:51.666 user 0m1.207s 00:06:51.666 sys 0m0.092s 00:06:51.666 22:04:16 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.666 22:04:16 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:51.666 ************************************ 00:06:51.666 END TEST accel_decomp 00:06:51.666 ************************************ 00:06:51.666 22:04:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:51.666 22:04:16 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:51.666 22:04:16 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:51.666 22:04:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.666 22:04:16 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.666 ************************************ 00:06:51.666 START TEST accel_decomp_full 00:06:51.666 ************************************ 00:06:51.666 22:04:16 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:51.666 22:04:16 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:51.666 22:04:16 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:51.666 22:04:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:51.666 22:04:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:51.666 22:04:16 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:51.666 22:04:16 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:51.666 22:04:16 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:51.666 22:04:16 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.666 22:04:16 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.666 22:04:16 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:51.667 [2024-07-15 22:04:16.661285] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:06:51.667 [2024-07-15 22:04:16.661344] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2574790 ] 00:06:51.667 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.667 [2024-07-15 22:04:16.722773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.667 [2024-07-15 22:04:16.788935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:51.667 22:04:16 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:51.667 22:04:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.606 22:04:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:52.606 22:04:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:52.606 22:04:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:52.606 22:04:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.606 22:04:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:52.606 22:04:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:52.606 22:04:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:52.606 22:04:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.606 22:04:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:52.606 22:04:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:52.606 22:04:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:52.606 22:04:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.606 22:04:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:52.606 22:04:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:52.606 22:04:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:52.606 22:04:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.606 22:04:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:52.606 22:04:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:52.606 22:04:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:52.606 22:04:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.866 22:04:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:52.866 22:04:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:52.866 22:04:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:52.866 22:04:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.866 22:04:17 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:52.866 22:04:17 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:52.866 22:04:17 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.866 00:06:52.866 real 0m1.297s 00:06:52.866 user 0m1.203s 00:06:52.866 sys 0m0.106s 00:06:52.866 22:04:17 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.866 22:04:17 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:52.866 ************************************ 00:06:52.866 END TEST accel_decomp_full 00:06:52.866 ************************************ 00:06:52.866 22:04:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:52.866 22:04:17 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:52.866 22:04:17 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:06:52.866 22:04:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.866 22:04:17 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.866 ************************************ 00:06:52.866 START TEST accel_decomp_mcore 00:06:52.866 ************************************ 00:06:52.866 22:04:18 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:52.866 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:52.866 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:52.866 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.866 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.866 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:52.866 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:52.866 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:52.866 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.866 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.866 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.866 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.866 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.866 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:52.866 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:52.866 [2024-07-15 22:04:18.031535] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
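The mcore variant is the first run here that leaves the single-core setup: accel_perf is given -m 0xf, the EAL line below shows -c 0xf, four cores are reported available, and reactors start on cores 0 through 3 instead of core 0 alone. Run stand-alone, that invocation looks like:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # decompress spread over cores 0-3 (-m 0xf), matching the four reactors in the trace
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK_DIR/test/accel/bib" -y -m 0xf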
00:06:52.866 [2024-07-15 22:04:18.031627] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2575141 ] 00:06:52.866 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.866 [2024-07-15 22:04:18.091992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:52.866 [2024-07-15 22:04:18.158414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.866 [2024-07-15 22:04:18.158528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.866 [2024-07-15 22:04:18.158685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.866 [2024-07-15 22:04:18.158685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.125 22:04:18 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:53.125 22:04:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.066 22:04:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:54.066 22:04:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.066 22:04:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.066 22:04:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.066 22:04:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:54.066 22:04:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.066 22:04:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.066 22:04:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.066 22:04:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:54.066 22:04:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.066 22:04:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.066 22:04:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.066 22:04:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:54.066 22:04:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.066 22:04:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.066 22:04:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.066 22:04:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:54.066 22:04:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.066 22:04:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.066 22:04:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.066 22:04:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:54.066 22:04:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.066 22:04:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.066 22:04:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.066 22:04:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:54.066 22:04:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.066 22:04:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.066 22:04:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.066 22:04:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:54.066 22:04:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.066 22:04:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.066 22:04:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.067 22:04:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:54.067 22:04:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.067 22:04:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.067 22:04:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.067 22:04:19 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:54.067 22:04:19 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:54.067 22:04:19 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.067 00:06:54.067 real 0m1.295s 00:06:54.067 user 0m4.430s 00:06:54.067 sys 0m0.112s 00:06:54.067 22:04:19 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.067 22:04:19 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:54.067 ************************************ 00:06:54.067 END TEST accel_decomp_mcore 00:06:54.067 ************************************ 00:06:54.067 22:04:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:54.067 22:04:19 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:54.067 22:04:19 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:54.067 22:04:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.067 22:04:19 accel -- common/autotest_common.sh@10 -- # set +x 00:06:54.067 ************************************ 00:06:54.067 START TEST accel_decomp_full_mcore 00:06:54.067 ************************************ 00:06:54.067 22:04:19 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:54.067 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:54.067 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:54.067 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.067 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.067 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:54.067 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:54.067 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:54.067 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:54.067 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:54.067 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.067 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.067 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:54.067 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:54.067 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:54.329 [2024-07-15 22:04:19.402824] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
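The xtrace above, and in every accel test that follows, is one small pattern repeated: accel_test feeds the settings it passes to accel_perf through a while IFS=: read -r var val loop with a case "$var" statement, so the harness ends up knowing which opcode and which module actually ran (here decompress on the software module), and the closing [[ -n software ]] / [[ -n decompress ]] checks assert that both were seen. A minimal sketch of that pattern (the opc:/module: key names and the sample input are illustrative assumptions, not the real accel.sh source):

  # Sketch of the IFS=: / read -r var val / case "$var" loop seen in the trace.
  accel_opc='' accel_module=''
  while IFS=: read -r var val; do
    case "$var" in
      opc)    accel_opc=$val ;;      # e.g. decompress
      module) accel_module=$val ;;   # e.g. software
      *)      ;;                     # anything else is ignored
    esac
  done < <(printf '%s\n' 'opc:decompress' 'module:software')
  echo "ran opcode '$accel_opc' on module '$accel_module'"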
00:06:54.329 [2024-07-15 22:04:19.402887] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2575387 ] 00:06:54.329 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.329 [2024-07-15 22:04:19.466298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:54.329 [2024-07-15 22:04:19.540922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.329 [2024-07-15 22:04:19.541035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:54.329 [2024-07-15 22:04:19.541180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.329 [2024-07-15 22:04:19.541180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.329 22:04:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.715 22:04:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:55.715 22:04:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.715 22:04:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.715 22:04:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.715 22:04:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:55.715 22:04:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.715 22:04:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.715 22:04:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.715 22:04:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:55.715 22:04:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.715 22:04:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.715 22:04:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.715 22:04:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:55.715 22:04:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.715 22:04:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.715 22:04:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.715 22:04:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:55.715 22:04:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.715 22:04:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.715 22:04:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.715 22:04:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:55.715 22:04:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.715 22:04:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.715 22:04:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.715 22:04:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:55.715 22:04:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.715 22:04:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.715 22:04:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.715 22:04:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:55.715 22:04:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.715 22:04:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.715 22:04:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.715 22:04:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:55.715 22:04:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.715 22:04:20 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:55.715 22:04:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.715 22:04:20 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:55.715 22:04:20 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:55.715 22:04:20 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.715 00:06:55.715 real 0m1.320s 00:06:55.715 user 0m4.488s 00:06:55.715 sys 0m0.117s 00:06:55.715 22:04:20 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.715 22:04:20 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:55.715 ************************************ 00:06:55.715 END TEST accel_decomp_full_mcore 00:06:55.715 ************************************ 00:06:55.715 22:04:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:55.715 22:04:20 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:55.715 22:04:20 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:55.715 22:04:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.715 22:04:20 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.715 ************************************ 00:06:55.715 START TEST accel_decomp_mthread 00:06:55.715 ************************************ 00:06:55.715 22:04:20 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:55.716 [2024-07-15 22:04:20.800877] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
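The accel_decomp_full_mcore run that finished above was launched with -m 0xf, and the four "Reactor started on core N" notices plus a user time (0m4.488s) of roughly four times the wall-clock time (0m1.320s) are consistent with the software decompress path running on cores 0-3 in parallel. A quick, purely illustrative way to expand such a hex core mask:

  # Expand an SPDK/DPDK-style hex core mask into the cores it enables.
  mask=0xf                       # the -m argument used above
  for ((core = 0; core < 32; core++)); do
    if (( (mask >> core) & 1 )); then
      echo "core $core enabled"
    fi
  done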
00:06:55.716 [2024-07-15 22:04:20.800943] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2575602 ] 00:06:55.716 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.716 [2024-07-15 22:04:20.864520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.716 [2024-07-15 22:04:20.934710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:55.716 22:04:20 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.716 22:04:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.101 22:04:22 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:06:57.101 22:04:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.101 22:04:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.101 22:04:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.101 22:04:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:57.101 22:04:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.101 22:04:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.101 22:04:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.101 22:04:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:57.101 22:04:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.101 22:04:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.101 22:04:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.101 22:04:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:57.101 22:04:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.101 22:04:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.101 22:04:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.101 22:04:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:57.101 22:04:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.101 22:04:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.101 22:04:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.101 22:04:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:57.101 22:04:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.101 22:04:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.101 22:04:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.101 22:04:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:57.101 22:04:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.101 22:04:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.101 22:04:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.101 22:04:22 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:57.101 22:04:22 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:57.101 22:04:22 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.101 00:06:57.101 real 0m1.301s 00:06:57.101 user 0m1.207s 00:06:57.101 sys 0m0.107s 00:06:57.101 22:04:22 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.101 22:04:22 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:57.101 ************************************ 00:06:57.101 END TEST accel_decomp_mthread 00:06:57.101 ************************************ 00:06:57.101 22:04:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:57.101 22:04:22 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:57.101 22:04:22 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:57.101 22:04:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.101 22:04:22 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:57.101 ************************************ 00:06:57.101 START TEST accel_decomp_full_mthread 00:06:57.101 ************************************ 00:06:57.101 22:04:22 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:57.101 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:57.101 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:57.101 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.101 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.101 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:57.102 [2024-07-15 22:04:22.173868] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
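Every accel_perf invocation in this section, including the accel_decomp_full_mthread run starting here, receives its configuration as -c /dev/fd/62: build_accel_config assembles a JSON snippet (apparently empty in these runs, since accel_json_cfg stays an empty array) and hands it to the binary through process substitution, so no temporary config file is written. A small stand-alone illustration of that mechanism; demo_config_fd and the placeholder JSON are made up for the example:

  # Pass generated JSON to a program as a file path without touching disk.
  demo_config_fd() {
    local cfg=$1                 # arrives as /dev/fd/<n>
    echo "config path: $cfg"
    cat "$cfg"
  }
  json_cfg='{"subsystems": []}'  # placeholder, not the real accel config
  demo_config_fd <(printf '%s\n' "$json_cfg")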
00:06:57.102 [2024-07-15 22:04:22.173959] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2575898 ] 00:06:57.102 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.102 [2024-07-15 22:04:22.234945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.102 [2024-07-15 22:04:22.300476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.102 22:04:22 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.102 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.102 22:04:22 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:57.103 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.103 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.103 22:04:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.486 22:04:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:58.486 22:04:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.486 22:04:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.486 22:04:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.486 22:04:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:58.486 22:04:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.486 22:04:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.486 22:04:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.486 22:04:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:58.486 22:04:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.486 22:04:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.486 22:04:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.486 22:04:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:58.486 22:04:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.486 22:04:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.486 22:04:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.486 22:04:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:58.486 22:04:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.486 22:04:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.486 22:04:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.486 22:04:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:58.486 22:04:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.486 22:04:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.486 22:04:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.486 22:04:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:58.486 22:04:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.486 22:04:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.486 22:04:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.486 22:04:23 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:58.486 22:04:23 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:58.486 22:04:23 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.486 00:06:58.486 real 0m1.319s 00:06:58.486 user 0m1.231s 00:06:58.486 sys 0m0.101s 00:06:58.486 22:04:23 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.486 22:04:23 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:58.486 ************************************ 00:06:58.486 END 
TEST accel_decomp_full_mthread 00:06:58.486 ************************************ 00:06:58.486 22:04:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:58.486 22:04:23 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:58.486 22:04:23 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:58.486 22:04:23 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:58.486 22:04:23 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:58.486 22:04:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.486 22:04:23 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.486 22:04:23 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.486 22:04:23 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.486 22:04:23 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.486 22:04:23 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.486 22:04:23 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.486 22:04:23 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:58.486 22:04:23 accel -- accel/accel.sh@41 -- # jq -r . 00:06:58.486 ************************************ 00:06:58.486 START TEST accel_dif_functional_tests 00:06:58.486 ************************************ 00:06:58.486 22:04:23 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:58.486 [2024-07-15 22:04:23.590175] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:06:58.486 [2024-07-15 22:04:23.590227] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2576255 ] 00:06:58.486 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.486 [2024-07-15 22:04:23.650856] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:58.486 [2024-07-15 22:04:23.722495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.486 [2024-07-15 22:04:23.722609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.486 [2024-07-15 22:04:23.722612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.486 00:06:58.486 00:06:58.486 CUnit - A unit testing framework for C - Version 2.1-3 00:06:58.486 http://cunit.sourceforge.net/ 00:06:58.486 00:06:58.486 00:06:58.486 Suite: accel_dif 00:06:58.486 Test: verify: DIF generated, GUARD check ...passed 00:06:58.486 Test: verify: DIF generated, APPTAG check ...passed 00:06:58.486 Test: verify: DIF generated, REFTAG check ...passed 00:06:58.486 Test: verify: DIF not generated, GUARD check ...[2024-07-15 22:04:23.778183] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:58.486 passed 00:06:58.486 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 22:04:23.778226] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:58.486 passed 00:06:58.486 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 22:04:23.778247] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:58.486 passed 00:06:58.486 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:58.486 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 
22:04:23.778295] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:58.486 passed 00:06:58.486 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:58.486 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:58.486 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:58.486 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 22:04:23.778408] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:58.486 passed 00:06:58.486 Test: verify copy: DIF generated, GUARD check ...passed 00:06:58.486 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:58.486 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:58.486 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 22:04:23.778531] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:58.486 passed 00:06:58.487 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 22:04:23.778553] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:58.487 passed 00:06:58.487 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 22:04:23.778575] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:58.487 passed 00:06:58.487 Test: generate copy: DIF generated, GUARD check ...passed 00:06:58.487 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:58.487 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:58.487 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:58.487 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:58.487 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:58.487 Test: generate copy: iovecs-len validate ...[2024-07-15 22:04:23.778757] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:58.487 passed 00:06:58.487 Test: generate copy: buffer alignment validate ...passed 00:06:58.487 00:06:58.487 Run Summary: Type Total Ran Passed Failed Inactive 00:06:58.487 suites 1 1 n/a 0 0 00:06:58.487 tests 26 26 26 0 0 00:06:58.487 asserts 115 115 115 0 n/a 00:06:58.487 00:06:58.487 Elapsed time = 0.002 seconds 00:06:58.747 00:06:58.747 real 0m0.352s 00:06:58.747 user 0m0.451s 00:06:58.747 sys 0m0.121s 00:06:58.747 22:04:23 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.747 22:04:23 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:58.747 ************************************ 00:06:58.747 END TEST accel_dif_functional_tests 00:06:58.747 ************************************ 00:06:58.747 22:04:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:58.747 00:06:58.747 real 0m30.048s 00:06:58.747 user 0m33.650s 00:06:58.747 sys 0m4.131s 00:06:58.747 22:04:23 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.747 22:04:23 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.747 ************************************ 00:06:58.747 END TEST accel 00:06:58.747 ************************************ 00:06:58.747 22:04:23 -- common/autotest_common.sh@1142 -- # return 0 00:06:58.747 22:04:23 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:58.747 22:04:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:58.747 22:04:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.747 22:04:23 -- common/autotest_common.sh@10 -- # set +x 00:06:58.747 ************************************ 00:06:58.747 START TEST accel_rpc 00:06:58.747 ************************************ 00:06:58.747 22:04:24 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:59.008 * Looking for test storage... 00:06:59.008 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:59.008 22:04:24 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:59.008 22:04:24 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:59.008 22:04:24 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2576344 00:06:59.008 22:04:24 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2576344 00:06:59.008 22:04:24 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 2576344 ']' 00:06:59.008 22:04:24 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.008 22:04:24 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:59.008 22:04:24 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.008 22:04:24 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:59.008 22:04:24 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.008 [2024-07-15 22:04:24.154139] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
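The accel_rpc suite starting here works differently from the accel_perf tests above: accel_rpc.sh launches spdk_tgt with --wait-for-rpc, records its pid (2576344), and waitforlisten then blocks until the target answers on /var/tmp/spdk.sock. A hedged sketch of what such a wait loop boils down to; the function name, retry count and relative rpc.py path are assumptions, not the real autotest_common.sh code:

  # Poll the RPC socket until the freshly started target responds.
  wait_for_rpc_socket() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
    for ((i = 0; i < max_retries; i++)); do
      kill -0 "$pid" 2>/dev/null || return 1     # target process died
      if scripts/rpc.py -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1; then
        return 0                                  # socket is up and answering
      fi
      sleep 0.1
    done
    return 1
  }
  # usage: wait_for_rpc_socket "$spdk_tgt_pid"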
00:06:59.008 [2024-07-15 22:04:24.154202] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2576344 ] 00:06:59.008 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.008 [2024-07-15 22:04:24.214462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.008 [2024-07-15 22:04:24.280885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.950 22:04:24 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:59.950 22:04:24 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:59.950 22:04:24 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:59.950 22:04:24 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:59.950 22:04:24 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:59.950 22:04:24 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:59.950 22:04:24 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:59.950 22:04:24 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:59.950 22:04:24 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.950 22:04:24 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.950 ************************************ 00:06:59.950 START TEST accel_assign_opcode 00:06:59.950 ************************************ 00:06:59.950 22:04:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:59.950 22:04:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:59.950 22:04:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.950 22:04:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:59.950 [2024-07-15 22:04:24.954835] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:59.950 22:04:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.950 22:04:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:59.950 22:04:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.950 22:04:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:59.950 [2024-07-15 22:04:24.966863] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:59.950 22:04:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.950 22:04:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:59.950 22:04:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.950 22:04:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:59.950 22:04:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.950 22:04:25 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:59.950 22:04:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.950 22:04:25 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 
00:06:59.950 22:04:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:59.950 22:04:25 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:59.950 22:04:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.950 software 00:06:59.950 00:06:59.950 real 0m0.208s 00:06:59.950 user 0m0.051s 00:06:59.950 sys 0m0.009s 00:06:59.950 22:04:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.951 22:04:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:59.951 ************************************ 00:06:59.951 END TEST accel_assign_opcode 00:06:59.951 ************************************ 00:06:59.951 22:04:25 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:59.951 22:04:25 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2576344 00:06:59.951 22:04:25 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 2576344 ']' 00:06:59.951 22:04:25 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 2576344 00:06:59.951 22:04:25 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:59.951 22:04:25 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:59.951 22:04:25 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2576344 00:06:59.951 22:04:25 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:59.951 22:04:25 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:59.951 22:04:25 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2576344' 00:06:59.951 killing process with pid 2576344 00:06:59.951 22:04:25 accel_rpc -- common/autotest_common.sh@967 -- # kill 2576344 00:06:59.951 22:04:25 accel_rpc -- common/autotest_common.sh@972 -- # wait 2576344 00:07:00.211 00:07:00.211 real 0m1.447s 00:07:00.211 user 0m1.540s 00:07:00.211 sys 0m0.382s 00:07:00.211 22:04:25 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.211 22:04:25 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.211 ************************************ 00:07:00.211 END TEST accel_rpc 00:07:00.211 ************************************ 00:07:00.211 22:04:25 -- common/autotest_common.sh@1142 -- # return 0 00:07:00.211 22:04:25 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:00.212 22:04:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:00.212 22:04:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.212 22:04:25 -- common/autotest_common.sh@10 -- # set +x 00:07:00.473 ************************************ 00:07:00.473 START TEST app_cmdline 00:07:00.473 ************************************ 00:07:00.473 22:04:25 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:00.473 * Looking for test storage... 
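Stripped of the xtrace noise, the accel_assign_opcode test that just finished is a short RPC conversation: while the target is still held in its --wait-for-rpc state the copy opcode is assigned first to a bogus module and then to the software module (the target only records the name at this point, as the two NOTICE lines show), subsystem initialization is started, and the assignment is checked. The rpc_cmd helper used in the trace boils down to calling rpc.py directly, roughly:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC accel_assign_opc -o copy -m incorrect    # accepted before init, as the NOTICE shows
  $RPC accel_assign_opc -o copy -m software     # re-assign copy to the software module
  $RPC framework_start_init                     # leave the --wait-for-rpc hold
  $RPC accel_get_opc_assignments | jq -r .copy | grep software   # prints: software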
00:07:00.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:00.473 22:04:25 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:00.473 22:04:25 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2576734 00:07:00.473 22:04:25 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2576734 00:07:00.473 22:04:25 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:00.473 22:04:25 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 2576734 ']' 00:07:00.473 22:04:25 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.473 22:04:25 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:00.473 22:04:25 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.473 22:04:25 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:00.473 22:04:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:00.473 [2024-07-15 22:04:25.703559] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:07:00.473 [2024-07-15 22:04:25.703627] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2576734 ] 00:07:00.473 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.473 [2024-07-15 22:04:25.771818] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.734 [2024-07-15 22:04:25.846422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.305 22:04:26 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:01.305 22:04:26 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:01.305 22:04:26 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:01.305 { 00:07:01.305 "version": "SPDK v24.09-pre git sha1 a940d3681", 00:07:01.305 "fields": { 00:07:01.305 "major": 24, 00:07:01.305 "minor": 9, 00:07:01.305 "patch": 0, 00:07:01.305 "suffix": "-pre", 00:07:01.305 "commit": "a940d3681" 00:07:01.305 } 00:07:01.305 } 00:07:01.305 22:04:26 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:01.305 22:04:26 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:01.305 22:04:26 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:01.305 22:04:26 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:01.566 22:04:26 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:01.566 22:04:26 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.566 22:04:26 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:01.566 22:04:26 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:01.566 22:04:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:01.566 22:04:26 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.566 22:04:26 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:01.566 22:04:26 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:01.566 22:04:26 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:01.566 22:04:26 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:01.566 22:04:26 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:01.566 22:04:26 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:01.566 22:04:26 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.566 22:04:26 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:01.566 22:04:26 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.566 22:04:26 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:01.566 22:04:26 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.566 22:04:26 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:01.566 22:04:26 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:01.566 22:04:26 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:01.566 request: 00:07:01.566 { 00:07:01.566 "method": "env_dpdk_get_mem_stats", 00:07:01.566 "req_id": 1 00:07:01.566 } 00:07:01.566 Got JSON-RPC error response 00:07:01.566 response: 00:07:01.566 { 00:07:01.566 "code": -32601, 00:07:01.566 "message": "Method not found" 00:07:01.566 } 00:07:01.566 22:04:26 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:01.566 22:04:26 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:01.566 22:04:26 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:01.566 22:04:26 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:01.566 22:04:26 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2576734 00:07:01.566 22:04:26 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 2576734 ']' 00:07:01.566 22:04:26 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 2576734 00:07:01.566 22:04:26 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:01.566 22:04:26 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:01.566 22:04:26 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2576734 00:07:01.827 22:04:26 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:01.827 22:04:26 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:01.827 22:04:26 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2576734' 00:07:01.827 killing process with pid 2576734 00:07:01.827 22:04:26 app_cmdline -- common/autotest_common.sh@967 -- # kill 2576734 00:07:01.827 22:04:26 app_cmdline -- common/autotest_common.sh@972 -- # wait 2576734 00:07:01.827 00:07:01.827 real 0m1.579s 00:07:01.827 user 0m1.896s 00:07:01.827 sys 0m0.417s 00:07:01.827 22:04:27 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
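The point of the app_cmdline run above: spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are reachable and anything else fails with JSON-RPC error -32601. A minimal sketch of the same check, assuming the default /var/tmp/spdk.sock socket and paths relative to the repo root:

  build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  scripts/rpc.py spdk_get_version           # allowed: returns the version object shown above
  scripts/rpc.py rpc_get_methods            # allowed: lists exactly the two permitted methods
  scripts/rpc.py env_dpdk_get_mem_stats     # rejected: "Method not found" (code -32601)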
00:07:01.827 22:04:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:01.827 ************************************ 00:07:01.827 END TEST app_cmdline 00:07:01.827 ************************************ 00:07:02.089 22:04:27 -- common/autotest_common.sh@1142 -- # return 0 00:07:02.089 22:04:27 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:02.089 22:04:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:02.089 22:04:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.089 22:04:27 -- common/autotest_common.sh@10 -- # set +x 00:07:02.089 ************************************ 00:07:02.089 START TEST version 00:07:02.089 ************************************ 00:07:02.089 22:04:27 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:02.089 * Looking for test storage... 00:07:02.089 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:02.089 22:04:27 version -- app/version.sh@17 -- # get_header_version major 00:07:02.089 22:04:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:02.089 22:04:27 version -- app/version.sh@14 -- # cut -f2 00:07:02.089 22:04:27 version -- app/version.sh@14 -- # tr -d '"' 00:07:02.089 22:04:27 version -- app/version.sh@17 -- # major=24 00:07:02.089 22:04:27 version -- app/version.sh@18 -- # get_header_version minor 00:07:02.089 22:04:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:02.089 22:04:27 version -- app/version.sh@14 -- # cut -f2 00:07:02.089 22:04:27 version -- app/version.sh@14 -- # tr -d '"' 00:07:02.089 22:04:27 version -- app/version.sh@18 -- # minor=9 00:07:02.089 22:04:27 version -- app/version.sh@19 -- # get_header_version patch 00:07:02.089 22:04:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:02.089 22:04:27 version -- app/version.sh@14 -- # cut -f2 00:07:02.089 22:04:27 version -- app/version.sh@14 -- # tr -d '"' 00:07:02.089 22:04:27 version -- app/version.sh@19 -- # patch=0 00:07:02.089 22:04:27 version -- app/version.sh@20 -- # get_header_version suffix 00:07:02.089 22:04:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:02.089 22:04:27 version -- app/version.sh@14 -- # cut -f2 00:07:02.089 22:04:27 version -- app/version.sh@14 -- # tr -d '"' 00:07:02.089 22:04:27 version -- app/version.sh@20 -- # suffix=-pre 00:07:02.089 22:04:27 version -- app/version.sh@22 -- # version=24.9 00:07:02.089 22:04:27 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:02.089 22:04:27 version -- app/version.sh@28 -- # version=24.9rc0 00:07:02.089 22:04:27 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:02.089 22:04:27 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:07:02.089 22:04:27 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:02.089 22:04:27 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:02.089 00:07:02.089 real 0m0.171s 00:07:02.089 user 0m0.092s 00:07:02.089 sys 0m0.118s 00:07:02.089 22:04:27 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.089 22:04:27 version -- common/autotest_common.sh@10 -- # set +x 00:07:02.089 ************************************ 00:07:02.089 END TEST version 00:07:02.089 ************************************ 00:07:02.089 22:04:27 -- common/autotest_common.sh@1142 -- # return 0 00:07:02.089 22:04:27 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:02.089 22:04:27 -- spdk/autotest.sh@198 -- # uname -s 00:07:02.089 22:04:27 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:02.089 22:04:27 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:02.089 22:04:27 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:02.089 22:04:27 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:02.089 22:04:27 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:02.089 22:04:27 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:02.089 22:04:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:02.089 22:04:27 -- common/autotest_common.sh@10 -- # set +x 00:07:02.351 22:04:27 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:02.351 22:04:27 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:02.351 22:04:27 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:02.351 22:04:27 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:02.351 22:04:27 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:02.351 22:04:27 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:02.351 22:04:27 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:02.351 22:04:27 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:02.351 22:04:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.351 22:04:27 -- common/autotest_common.sh@10 -- # set +x 00:07:02.351 ************************************ 00:07:02.351 START TEST nvmf_tcp 00:07:02.351 ************************************ 00:07:02.351 22:04:27 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:02.351 * Looking for test storage... 00:07:02.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:02.351 22:04:27 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:02.351 22:04:27 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:02.351 22:04:27 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:02.351 22:04:27 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:02.351 22:04:27 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:02.351 22:04:27 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:02.351 22:04:27 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:02.351 22:04:27 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:02.351 22:04:27 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:02.351 22:04:27 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:02.351 22:04:27 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:02.351 22:04:27 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:02.351 22:04:27 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:02.351 22:04:27 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:02.351 22:04:27 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:02.351 22:04:27 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:02.351 22:04:27 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:02.351 22:04:27 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:02.351 22:04:27 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:02.351 22:04:27 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:02.351 22:04:27 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:02.351 22:04:27 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:02.351 22:04:27 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:02.351 22:04:27 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:02.351 22:04:27 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.351 22:04:27 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.351 22:04:27 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.351 22:04:27 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:02.351 22:04:27 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.351 22:04:27 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:02.351 22:04:27 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:02.351 22:04:27 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:02.351 22:04:27 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:02.351 22:04:27 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:02.351 22:04:27 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:02.351 22:04:27 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:02.351 22:04:27 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:02.351 22:04:27 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:02.351 22:04:27 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:02.351 22:04:27 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:02.351 22:04:27 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:02.351 22:04:27 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:02.351 22:04:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:02.351 22:04:27 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:02.351 22:04:27 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:02.351 22:04:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:02.351 22:04:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.351 22:04:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:02.351 ************************************ 00:07:02.351 START TEST nvmf_example 00:07:02.351 ************************************ 00:07:02.351 22:04:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:02.613 * Looking for test storage... 
00:07:02.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:02.613 22:04:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:09.251 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:09.251 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:09.251 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:09.251 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:09.251 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:09.251 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:09.251 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:09.251 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:09.251 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:09.251 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:09.251 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:09.251 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:09.251 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:09.251 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:09.251 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:09.251 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:09.251 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:09.251 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:09.251 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:09.251 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:09.251 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:09.251 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:09.251 22:04:34 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:09.251 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:09.251 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:09.251 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:09.251 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:09.251 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:09.251 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:09.251 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:09.251 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:09.251 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:09.251 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:09.251 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:09.251 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:09.251 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:09.251 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:09.251 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:09.251 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:09.251 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:09.251 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:09.251 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:09.251 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:09.251 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:09.251 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:09.251 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:09.252 Found net devices under 
0000:4b:00.0: cvl_0_0 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:09.252 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:09.252 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:09.512 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:09.512 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:09.512 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.577 ms 00:07:09.512 00:07:09.512 --- 10.0.0.2 ping statistics --- 00:07:09.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.512 rtt min/avg/max/mdev = 0.577/0.577/0.577/0.000 ms 00:07:09.512 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:09.512 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:09.512 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.337 ms 00:07:09.512 00:07:09.512 --- 10.0.0.1 ping statistics --- 00:07:09.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.512 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:07:09.512 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:09.512 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:09.512 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:09.512 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:09.512 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:09.512 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:09.512 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:09.512 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:09.512 22:04:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:09.512 22:04:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:09.512 22:04:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:09.512 22:04:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:09.512 22:04:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:09.512 22:04:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:09.512 22:04:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:09.512 22:04:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2580947 00:07:09.512 22:04:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:09.512 22:04:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:09.512 22:04:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2580947 00:07:09.512 22:04:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 2580947 ']' 00:07:09.512 22:04:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.512 22:04:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:09.512 22:04:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
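To summarize the nvmftestinit plumbing traced above: the two ice-driver ports (0000:4b:00.0 and 0000:4b:00.1) surface as cvl_0_0 and cvl_0_1; cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and carries the target address 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. Condensed, using the same commands as in the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2      # initiator -> target, answered in ~0.6 ms above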
00:07:09.512 22:04:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:09.512 22:04:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:09.512 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.449 22:04:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:10.449 22:04:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:07:10.449 22:04:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:10.449 22:04:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:10.449 22:04:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:10.449 22:04:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:10.449 22:04:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.449 22:04:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:10.449 22:04:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.449 22:04:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:10.449 22:04:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.449 22:04:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:10.449 22:04:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.449 22:04:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:10.449 22:04:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:10.449 22:04:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.449 22:04:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:10.449 22:04:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.449 22:04:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:10.449 22:04:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:10.449 22:04:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.449 22:04:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:10.449 22:04:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.449 22:04:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:10.449 22:04:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.449 22:04:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:10.449 22:04:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.449 22:04:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:10.450 22:04:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:10.450 EAL: No free 2048 kB hugepages reported on node 1 
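Stripped of the xtrace noise, the target bring-up and load generation just traced amounts to the following (a sketch: RPCs go to the nvmf example app started inside the target namespace, the perf run is issued from the initiator side, paths relative to the repo root):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512                 # -> Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The 10-second randrw run reports its IOPS and latency summary below.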
00:07:20.445 Initializing NVMe Controllers 00:07:20.445 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:20.445 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:20.445 Initialization complete. Launching workers. 00:07:20.445 ======================================================== 00:07:20.445 Latency(us) 00:07:20.445 Device Information : IOPS MiB/s Average min max 00:07:20.445 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16459.80 64.30 3889.59 780.70 16260.23 00:07:20.445 ======================================================== 00:07:20.445 Total : 16459.80 64.30 3889.59 780.70 16260.23 00:07:20.445 00:07:20.704 22:04:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:20.705 22:04:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:20.705 22:04:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:20.705 22:04:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:20.705 22:04:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:20.705 22:04:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:20.705 22:04:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:20.705 22:04:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:20.705 rmmod nvme_tcp 00:07:20.705 rmmod nvme_fabrics 00:07:20.705 rmmod nvme_keyring 00:07:20.705 22:04:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:20.705 22:04:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:20.705 22:04:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:20.705 22:04:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2580947 ']' 00:07:20.705 22:04:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2580947 00:07:20.705 22:04:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 2580947 ']' 00:07:20.705 22:04:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 2580947 00:07:20.705 22:04:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:07:20.705 22:04:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:20.705 22:04:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2580947 00:07:20.705 22:04:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:07:20.705 22:04:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:07:20.705 22:04:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2580947' 00:07:20.705 killing process with pid 2580947 00:07:20.705 22:04:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 2580947 00:07:20.705 22:04:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 2580947 00:07:20.705 nvmf threads initialize successfully 00:07:20.705 bdev subsystem init successfully 00:07:20.705 created a nvmf target service 00:07:20.705 create targets's poll groups done 00:07:20.705 all subsystems of target started 00:07:20.705 nvmf target is running 00:07:20.705 all subsystems of target stopped 00:07:20.705 destroy targets's poll groups done 00:07:20.705 destroyed the nvmf target service 00:07:20.705 bdev subsystem finish successfully 00:07:20.705 nvmf threads destroy successfully 00:07:20.705 22:04:46 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:20.705 22:04:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:20.705 22:04:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:20.705 22:04:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:20.705 22:04:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:20.705 22:04:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:20.705 22:04:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:20.705 22:04:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:22.759 22:04:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:23.021 22:04:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:23.021 22:04:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:23.021 22:04:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:23.021 00:07:23.021 real 0m20.471s 00:07:23.021 user 0m46.084s 00:07:23.021 sys 0m6.193s 00:07:23.021 22:04:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.021 22:04:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:23.021 ************************************ 00:07:23.021 END TEST nvmf_example 00:07:23.021 ************************************ 00:07:23.021 22:04:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:23.021 22:04:48 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:23.021 22:04:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:23.021 22:04:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.021 22:04:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:23.021 ************************************ 00:07:23.021 START TEST nvmf_filesystem 00:07:23.021 ************************************ 00:07:23.021 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:23.021 * Looking for test storage... 
00:07:23.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:23.021 22:04:48 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:23.021 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:23.021 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:23.021 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:23.021 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:23.021 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:23.021 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:23.021 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:23.021 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:23.021 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:23.021 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:23.021 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:23.021 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:23.021 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:23.021 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:23.021 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:23.021 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:23.021 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:23.021 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:23.021 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:23.021 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:23.021 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:23.021 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:23.021 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:23.021 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:23.021 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:23.022 22:04:48 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:23.022 22:04:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:23.285 22:04:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:23.285 22:04:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:23.285 22:04:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:23.285 22:04:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:23.285 22:04:48 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:23.285 22:04:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:23.285 22:04:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:23.285 22:04:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:23.285 22:04:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:23.285 22:04:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:23.285 22:04:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:23.285 22:04:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:23.285 22:04:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:23.285 22:04:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:23.285 #define SPDK_CONFIG_H 00:07:23.285 #define SPDK_CONFIG_APPS 1 00:07:23.285 #define SPDK_CONFIG_ARCH native 00:07:23.285 #undef SPDK_CONFIG_ASAN 00:07:23.285 #undef SPDK_CONFIG_AVAHI 00:07:23.285 #undef SPDK_CONFIG_CET 00:07:23.285 #define SPDK_CONFIG_COVERAGE 1 00:07:23.285 #define SPDK_CONFIG_CROSS_PREFIX 00:07:23.285 #undef SPDK_CONFIG_CRYPTO 00:07:23.285 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:23.285 #undef SPDK_CONFIG_CUSTOMOCF 00:07:23.285 #undef SPDK_CONFIG_DAOS 00:07:23.285 #define SPDK_CONFIG_DAOS_DIR 00:07:23.285 #define SPDK_CONFIG_DEBUG 1 00:07:23.285 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:23.285 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:23.285 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:23.285 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:23.285 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:23.285 #undef SPDK_CONFIG_DPDK_UADK 00:07:23.285 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:23.285 #define SPDK_CONFIG_EXAMPLES 1 00:07:23.285 #undef SPDK_CONFIG_FC 00:07:23.285 #define SPDK_CONFIG_FC_PATH 00:07:23.285 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:23.285 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:23.285 #undef SPDK_CONFIG_FUSE 00:07:23.285 #undef SPDK_CONFIG_FUZZER 00:07:23.285 #define SPDK_CONFIG_FUZZER_LIB 00:07:23.285 #undef SPDK_CONFIG_GOLANG 00:07:23.285 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:23.285 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:23.285 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:23.285 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:23.285 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:23.285 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:23.285 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:23.285 #define SPDK_CONFIG_IDXD 1 00:07:23.285 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:23.285 #undef SPDK_CONFIG_IPSEC_MB 00:07:23.285 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:23.285 #define SPDK_CONFIG_ISAL 1 00:07:23.285 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:23.285 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:23.285 #define SPDK_CONFIG_LIBDIR 00:07:23.285 #undef SPDK_CONFIG_LTO 00:07:23.285 #define SPDK_CONFIG_MAX_LCORES 128 00:07:23.285 #define SPDK_CONFIG_NVME_CUSE 1 00:07:23.285 #undef SPDK_CONFIG_OCF 00:07:23.285 #define SPDK_CONFIG_OCF_PATH 00:07:23.285 #define 
SPDK_CONFIG_OPENSSL_PATH 00:07:23.285 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:23.285 #define SPDK_CONFIG_PGO_DIR 00:07:23.285 #undef SPDK_CONFIG_PGO_USE 00:07:23.285 #define SPDK_CONFIG_PREFIX /usr/local 00:07:23.285 #undef SPDK_CONFIG_RAID5F 00:07:23.285 #undef SPDK_CONFIG_RBD 00:07:23.285 #define SPDK_CONFIG_RDMA 1 00:07:23.285 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:23.285 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:23.285 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:23.285 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:23.285 #define SPDK_CONFIG_SHARED 1 00:07:23.285 #undef SPDK_CONFIG_SMA 00:07:23.285 #define SPDK_CONFIG_TESTS 1 00:07:23.285 #undef SPDK_CONFIG_TSAN 00:07:23.285 #define SPDK_CONFIG_UBLK 1 00:07:23.285 #define SPDK_CONFIG_UBSAN 1 00:07:23.285 #undef SPDK_CONFIG_UNIT_TESTS 00:07:23.285 #undef SPDK_CONFIG_URING 00:07:23.285 #define SPDK_CONFIG_URING_PATH 00:07:23.285 #undef SPDK_CONFIG_URING_ZNS 00:07:23.285 #undef SPDK_CONFIG_USDT 00:07:23.285 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:23.285 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:23.285 #define SPDK_CONFIG_VFIO_USER 1 00:07:23.285 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:23.285 #define SPDK_CONFIG_VHOST 1 00:07:23.285 #define SPDK_CONFIG_VIRTIO 1 00:07:23.285 #undef SPDK_CONFIG_VTUNE 00:07:23.285 #define SPDK_CONFIG_VTUNE_DIR 00:07:23.285 #define SPDK_CONFIG_WERROR 1 00:07:23.285 #define SPDK_CONFIG_WPDK_DIR 00:07:23.285 #undef SPDK_CONFIG_XNVME 00:07:23.285 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:23.286 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:23.287 22:04:48 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:23.287 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j144 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 2583797 ]] 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 2583797 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.EXlc8Z 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.EXlc8Z/tests/target /tmp/spdk.EXlc8Z 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- 
# avails["$mount"]=67108864 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=954236928 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330192896 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=118636392448 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=129371013120 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=10734620672 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64680796160 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685506560 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=25864503296 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=25874202624 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9699328 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=efivarfs 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=efivarfs 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=216064 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=507904 00:07:23.288 22:04:48 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=287744 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64684019712 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685506560 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=1486848 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12937097216 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12937101312 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:23.288 * Looking for test storage... 00:07:23.288 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=118636392448 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=12949213184 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:23.289 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:23.289 22:04:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:31.422 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 
(0x8086 - 0x159b)' 00:07:31.422 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:31.422 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:31.422 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:31.422 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:31.423 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:31.423 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms 00:07:31.423 00:07:31.423 --- 10.0.0.2 ping statistics --- 00:07:31.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.423 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:31.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:31.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.472 ms 00:07:31.423 00:07:31.423 --- 10.0.0.1 ping statistics --- 00:07:31.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.423 rtt min/avg/max/mdev = 0.472/0.472/0.472/0.000 ms 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:31.423 ************************************ 00:07:31.423 START TEST nvmf_filesystem_no_in_capsule 00:07:31.423 ************************************ 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2587566 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2587566 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 2587566 ']' 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:31.423 22:04:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.423 [2024-07-15 22:04:55.653618] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:07:31.423 [2024-07-15 22:04:55.653679] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:31.423 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.423 [2024-07-15 22:04:55.723557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:31.423 [2024-07-15 22:04:55.800882] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:31.423 [2024-07-15 22:04:55.800920] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:31.423 [2024-07-15 22:04:55.800928] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:31.423 [2024-07-15 22:04:55.800934] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:31.423 [2024-07-15 22:04:55.800939] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:31.423 [2024-07-15 22:04:55.801076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.423 [2024-07-15 22:04:55.801210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.423 [2024-07-15 22:04:55.801269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.423 [2024-07-15 22:04:55.801271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:31.423 22:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:31.423 22:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:31.423 22:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:31.423 22:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:31.423 22:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.423 22:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:31.423 22:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:31.423 22:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:31.423 22:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.423 22:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.423 [2024-07-15 22:04:56.482797] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
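For reference, the namespace and target bring-up traced above (nvmf_tcp_init plus nvmfappstart) reduces to the following sequence. The interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses, and the nvmf_tgt invocation are copied from this run, so treat this as a condensed sketch of this particular job rather than a general recipe.

    # Move one port of the NIC pair into a private namespace and address both ends.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port and verify reachability in both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # Start the NVMe-oF target inside the namespace, as this job does.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF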
00:07:31.423 22:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.423 22:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:31.423 22:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.423 22:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.423 Malloc1 00:07:31.423 22:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.423 22:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:31.423 22:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.423 22:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.423 22:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.423 22:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:31.423 22:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.423 22:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.423 22:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.423 22:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:31.423 22:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.423 22:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.423 [2024-07-15 22:04:56.615410] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:31.423 22:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.423 22:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:31.423 22:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:31.423 22:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:31.423 22:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:31.423 22:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:31.423 22:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:31.423 22:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.424 22:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:31.424 22:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.424 22:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:31.424 { 00:07:31.424 "name": "Malloc1", 00:07:31.424 "aliases": [ 00:07:31.424 "e4e4c42b-1eb5-483a-b078-764b5da12ed3" 00:07:31.424 ], 00:07:31.424 "product_name": "Malloc disk", 00:07:31.424 "block_size": 512, 00:07:31.424 "num_blocks": 1048576, 00:07:31.424 "uuid": "e4e4c42b-1eb5-483a-b078-764b5da12ed3", 00:07:31.424 "assigned_rate_limits": { 00:07:31.424 "rw_ios_per_sec": 0, 00:07:31.424 "rw_mbytes_per_sec": 0, 00:07:31.424 "r_mbytes_per_sec": 0, 00:07:31.424 "w_mbytes_per_sec": 0 00:07:31.424 }, 00:07:31.424 "claimed": true, 00:07:31.424 "claim_type": "exclusive_write", 00:07:31.424 "zoned": false, 00:07:31.424 "supported_io_types": { 00:07:31.424 "read": true, 00:07:31.424 "write": true, 00:07:31.424 "unmap": true, 00:07:31.424 "flush": true, 00:07:31.424 "reset": true, 00:07:31.424 "nvme_admin": false, 00:07:31.424 "nvme_io": false, 00:07:31.424 "nvme_io_md": false, 00:07:31.424 "write_zeroes": true, 00:07:31.424 "zcopy": true, 00:07:31.424 "get_zone_info": false, 00:07:31.424 "zone_management": false, 00:07:31.424 "zone_append": false, 00:07:31.424 "compare": false, 00:07:31.424 "compare_and_write": false, 00:07:31.424 "abort": true, 00:07:31.424 "seek_hole": false, 00:07:31.424 "seek_data": false, 00:07:31.424 "copy": true, 00:07:31.424 "nvme_iov_md": false 00:07:31.424 }, 00:07:31.424 "memory_domains": [ 00:07:31.424 { 00:07:31.424 "dma_device_id": "system", 00:07:31.424 "dma_device_type": 1 00:07:31.424 }, 00:07:31.424 { 00:07:31.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.424 "dma_device_type": 2 00:07:31.424 } 00:07:31.424 ], 00:07:31.424 "driver_specific": {} 00:07:31.424 } 00:07:31.424 ]' 00:07:31.424 22:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:31.424 22:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:31.424 22:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:31.424 22:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:31.424 22:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:31.424 22:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:31.424 22:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:31.424 22:04:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:33.334 22:04:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:33.334 22:04:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:33.334 22:04:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local 
nvme_device_counter=1 nvme_devices=0 00:07:33.334 22:04:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:33.334 22:04:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:35.244 22:05:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:35.245 22:05:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:35.245 22:05:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:35.245 22:05:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:35.245 22:05:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:35.245 22:05:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:35.245 22:05:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:35.245 22:05:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:35.245 22:05:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:35.245 22:05:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:35.245 22:05:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:35.245 22:05:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:35.245 22:05:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:35.245 22:05:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:35.245 22:05:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:35.245 22:05:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:35.245 22:05:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:35.505 22:05:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:36.076 22:05:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:37.029 22:05:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:37.029 22:05:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:37.029 22:05:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:37.029 22:05:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.029 22:05:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:37.029 
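The provisioning steps between the transport init notice and the first filesystem subtest, condensed into plain commands. NQN, serial, addresses, and sizes are as captured in this trace; rpc_cmd is the test suite's RPC helper, so the exact wrapper invocation is an approximation of what the scripts run.

    # Target side: one 512 MB malloc bdev (512-byte blocks) exported as a namespace of cnode1.
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc_cmd bdev_malloc_create 512 512 -b Malloc1
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Initiator side: connect with the kernel driver, wait for the serial to show up,
    # then carve a single GPT partition for the filesystem subtests.
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # waitforserial loop condition
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe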
************************************ 00:07:37.029 START TEST filesystem_ext4 00:07:37.029 ************************************ 00:07:37.029 22:05:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:37.029 22:05:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:37.029 22:05:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:37.029 22:05:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:37.029 22:05:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:37.029 22:05:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:37.029 22:05:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:37.029 22:05:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:37.029 22:05:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:37.029 22:05:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:37.029 22:05:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:37.029 mke2fs 1.46.5 (30-Dec-2021) 00:07:37.029 Discarding device blocks: 0/522240 done 00:07:37.289 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:37.289 Filesystem UUID: bf3af25c-6b90-4d77-a151-28f4f5f0ad83 00:07:37.289 Superblock backups stored on blocks: 00:07:37.289 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:37.289 00:07:37.289 Allocating group tables: 0/64 done 00:07:37.289 Writing inode tables: 0/64 done 00:07:39.829 Creating journal (8192 blocks): done 00:07:40.399 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:07:40.399 00:07:40.399 22:05:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:40.399 22:05:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:40.971 22:05:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:40.971 22:05:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:40.971 22:05:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:40.971 22:05:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:40.971 22:05:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:40.971 22:05:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:40.971 22:05:06 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2587566 00:07:40.971 22:05:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:40.971 22:05:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:40.971 22:05:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:40.971 22:05:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:40.971 00:07:40.971 real 0m3.995s 00:07:40.971 user 0m0.032s 00:07:40.971 sys 0m0.066s 00:07:40.971 22:05:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:40.971 22:05:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:40.971 ************************************ 00:07:40.971 END TEST filesystem_ext4 00:07:40.971 ************************************ 00:07:40.971 22:05:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:40.971 22:05:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:40.971 22:05:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:40.971 22:05:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.971 22:05:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:41.231 ************************************ 00:07:41.231 START TEST filesystem_btrfs 00:07:41.231 ************************************ 00:07:41.231 22:05:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:41.231 22:05:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:41.231 22:05:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:41.231 22:05:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:41.231 22:05:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:41.231 22:05:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:41.231 22:05:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:41.231 22:05:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:41.231 22:05:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:41.231 22:05:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:41.231 
22:05:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:41.231 btrfs-progs v6.6.2 00:07:41.232 See https://btrfs.readthedocs.io for more information. 00:07:41.232 00:07:41.232 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:41.232 NOTE: several default settings have changed in version 5.15, please make sure 00:07:41.232 this does not affect your deployments: 00:07:41.232 - DUP for metadata (-m dup) 00:07:41.232 - enabled no-holes (-O no-holes) 00:07:41.232 - enabled free-space-tree (-R free-space-tree) 00:07:41.232 00:07:41.232 Label: (null) 00:07:41.232 UUID: 3cea2232-cb0e-4c5f-b5d8-40907b4d847e 00:07:41.232 Node size: 16384 00:07:41.232 Sector size: 4096 00:07:41.232 Filesystem size: 510.00MiB 00:07:41.232 Block group profiles: 00:07:41.232 Data: single 8.00MiB 00:07:41.232 Metadata: DUP 32.00MiB 00:07:41.232 System: DUP 8.00MiB 00:07:41.232 SSD detected: yes 00:07:41.232 Zoned device: no 00:07:41.232 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:41.232 Runtime features: free-space-tree 00:07:41.232 Checksum: crc32c 00:07:41.232 Number of devices: 1 00:07:41.232 Devices: 00:07:41.232 ID SIZE PATH 00:07:41.232 1 510.00MiB /dev/nvme0n1p1 00:07:41.232 00:07:41.232 22:05:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:41.232 22:05:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:42.171 22:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:42.171 22:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:42.171 22:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:42.171 22:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:42.171 22:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:42.171 22:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:42.438 22:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2587566 00:07:42.438 22:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:42.438 22:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:42.438 22:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:42.438 22:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:42.438 00:07:42.438 real 0m1.208s 00:07:42.438 user 0m0.039s 00:07:42.438 sys 0m0.121s 00:07:42.438 22:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.438 22:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 
00:07:42.438 ************************************ 00:07:42.438 END TEST filesystem_btrfs 00:07:42.438 ************************************ 00:07:42.438 22:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:42.438 22:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:42.438 22:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:42.438 22:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.438 22:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:42.438 ************************************ 00:07:42.438 START TEST filesystem_xfs 00:07:42.438 ************************************ 00:07:42.438 22:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:42.438 22:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:42.438 22:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:42.438 22:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:42.438 22:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:42.438 22:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:42.438 22:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:42.438 22:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:07:42.438 22:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:42.438 22:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:42.438 22:05:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:42.438 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:42.438 = sectsz=512 attr=2, projid32bit=1 00:07:42.438 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:42.438 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:42.438 data = bsize=4096 blocks=130560, imaxpct=25 00:07:42.438 = sunit=0 swidth=0 blks 00:07:42.438 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:42.438 log =internal log bsize=4096 blocks=16384, version=2 00:07:42.438 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:42.438 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:43.377 Discarding blocks...Done. 
00:07:43.377 22:05:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:43.377 22:05:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:45.288 22:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:45.288 22:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:45.288 22:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:45.288 22:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:45.288 22:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:45.288 22:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:45.549 22:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2587566 00:07:45.549 22:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:45.549 22:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:45.549 22:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:45.549 22:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:45.549 00:07:45.549 real 0m3.021s 00:07:45.549 user 0m0.032s 00:07:45.549 sys 0m0.068s 00:07:45.549 22:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.549 22:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:45.549 ************************************ 00:07:45.549 END TEST filesystem_xfs 00:07:45.549 ************************************ 00:07:45.549 22:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:45.549 22:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:45.809 22:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:45.809 22:05:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:45.809 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:45.809 22:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:45.809 22:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:45.810 22:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:45.810 22:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:45.810 22:05:11 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:45.810 22:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:45.810 22:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:45.810 22:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:45.810 22:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.810 22:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:45.810 22:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.810 22:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:45.810 22:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2587566 00:07:45.810 22:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2587566 ']' 00:07:45.810 22:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2587566 00:07:45.810 22:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:45.810 22:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:45.810 22:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2587566 00:07:45.810 22:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:45.810 22:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:45.810 22:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2587566' 00:07:45.810 killing process with pid 2587566 00:07:45.810 22:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 2587566 00:07:45.810 22:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 2587566 00:07:46.070 22:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:46.070 00:07:46.070 real 0m15.771s 00:07:46.070 user 1m2.206s 00:07:46.070 sys 0m1.278s 00:07:46.070 22:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.070 22:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:46.070 ************************************ 00:07:46.070 END TEST nvmf_filesystem_no_in_capsule 00:07:46.070 ************************************ 00:07:46.331 22:05:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:46.331 22:05:11 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:46.331 22:05:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:07:46.331 22:05:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.331 22:05:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:46.331 ************************************ 00:07:46.331 START TEST nvmf_filesystem_in_capsule 00:07:46.331 ************************************ 00:07:46.331 22:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:07:46.331 22:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:46.331 22:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:46.331 22:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:46.331 22:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:46.331 22:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:46.331 22:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2590822 00:07:46.331 22:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2590822 00:07:46.331 22:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 2590822 ']' 00:07:46.331 22:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:46.331 22:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.331 22:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:46.331 22:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.331 22:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:46.331 22:05:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:46.331 [2024-07-15 22:05:11.502701] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:07:46.331 [2024-07-15 22:05:11.502749] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:46.331 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.331 [2024-07-15 22:05:11.568646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:46.331 [2024-07-15 22:05:11.633383] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:46.331 [2024-07-15 22:05:11.633421] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:46.331 [2024-07-15 22:05:11.633429] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:46.331 [2024-07-15 22:05:11.633436] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:46.331 [2024-07-15 22:05:11.633441] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:46.331 [2024-07-15 22:05:11.633583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.331 [2024-07-15 22:05:11.633700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:46.331 [2024-07-15 22:05:11.633858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.331 [2024-07-15 22:05:11.633859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:47.273 22:05:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:47.273 22:05:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:47.273 22:05:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:47.273 22:05:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:47.273 22:05:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:47.273 22:05:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:47.273 22:05:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:47.273 22:05:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:47.273 22:05:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.273 22:05:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:47.273 [2024-07-15 22:05:12.323857] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:47.273 22:05:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.273 22:05:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:47.273 22:05:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.273 22:05:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:47.273 Malloc1 00:07:47.273 22:05:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.273 22:05:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:47.273 22:05:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.273 22:05:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:47.273 22:05:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.273 22:05:12 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:47.273 22:05:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.273 22:05:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:47.273 22:05:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.273 22:05:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:47.273 22:05:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.273 22:05:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:47.273 [2024-07-15 22:05:12.450475] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:47.273 22:05:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.273 22:05:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:47.273 22:05:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:47.273 22:05:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:47.273 22:05:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:47.273 22:05:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:47.273 22:05:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:47.273 22:05:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.273 22:05:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:47.273 22:05:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.273 22:05:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:47.273 { 00:07:47.273 "name": "Malloc1", 00:07:47.273 "aliases": [ 00:07:47.273 "6007d641-1da0-4899-9e7a-fa7ea6193f7b" 00:07:47.273 ], 00:07:47.273 "product_name": "Malloc disk", 00:07:47.273 "block_size": 512, 00:07:47.273 "num_blocks": 1048576, 00:07:47.273 "uuid": "6007d641-1da0-4899-9e7a-fa7ea6193f7b", 00:07:47.273 "assigned_rate_limits": { 00:07:47.273 "rw_ios_per_sec": 0, 00:07:47.273 "rw_mbytes_per_sec": 0, 00:07:47.273 "r_mbytes_per_sec": 0, 00:07:47.273 "w_mbytes_per_sec": 0 00:07:47.273 }, 00:07:47.273 "claimed": true, 00:07:47.273 "claim_type": "exclusive_write", 00:07:47.273 "zoned": false, 00:07:47.273 "supported_io_types": { 00:07:47.273 "read": true, 00:07:47.273 "write": true, 00:07:47.273 "unmap": true, 00:07:47.273 "flush": true, 00:07:47.273 "reset": true, 00:07:47.273 "nvme_admin": false, 00:07:47.273 "nvme_io": false, 00:07:47.273 "nvme_io_md": false, 00:07:47.273 "write_zeroes": true, 00:07:47.273 "zcopy": true, 00:07:47.273 "get_zone_info": false, 00:07:47.273 "zone_management": false, 00:07:47.273 
"zone_append": false, 00:07:47.273 "compare": false, 00:07:47.273 "compare_and_write": false, 00:07:47.273 "abort": true, 00:07:47.273 "seek_hole": false, 00:07:47.273 "seek_data": false, 00:07:47.273 "copy": true, 00:07:47.273 "nvme_iov_md": false 00:07:47.273 }, 00:07:47.273 "memory_domains": [ 00:07:47.273 { 00:07:47.273 "dma_device_id": "system", 00:07:47.273 "dma_device_type": 1 00:07:47.273 }, 00:07:47.273 { 00:07:47.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.273 "dma_device_type": 2 00:07:47.273 } 00:07:47.273 ], 00:07:47.273 "driver_specific": {} 00:07:47.273 } 00:07:47.273 ]' 00:07:47.273 22:05:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:47.273 22:05:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:47.273 22:05:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:47.274 22:05:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:47.274 22:05:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:47.274 22:05:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:47.274 22:05:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:47.274 22:05:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:49.184 22:05:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:49.184 22:05:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:49.184 22:05:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:49.184 22:05:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:49.184 22:05:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:51.095 22:05:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:51.095 22:05:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:51.095 22:05:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:51.095 22:05:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:51.095 22:05:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:51.095 22:05:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:51.095 22:05:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:51.095 22:05:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:07:51.095 22:05:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:51.095 22:05:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:51.095 22:05:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:51.095 22:05:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:51.095 22:05:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:51.095 22:05:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:51.095 22:05:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:51.095 22:05:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:51.095 22:05:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:51.095 22:05:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:51.355 22:05:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:52.298 22:05:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:52.298 22:05:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:52.298 22:05:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:52.298 22:05:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.298 22:05:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:52.298 ************************************ 00:07:52.298 START TEST filesystem_in_capsule_ext4 00:07:52.298 ************************************ 00:07:52.298 22:05:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:52.298 22:05:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:52.298 22:05:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:52.298 22:05:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:52.298 22:05:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:52.298 22:05:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:52.298 22:05:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:52.298 22:05:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:52.298 22:05:17 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:52.298 22:05:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:52.298 22:05:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:52.298 mke2fs 1.46.5 (30-Dec-2021) 00:07:52.298 Discarding device blocks: 0/522240 done 00:07:52.298 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:52.298 Filesystem UUID: 3d3b4cbb-8617-4023-813b-38325f8c263b 00:07:52.298 Superblock backups stored on blocks: 00:07:52.298 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:52.298 00:07:52.298 Allocating group tables: 0/64 done 00:07:52.298 Writing inode tables: 0/64 done 00:07:53.239 Creating journal (8192 blocks): done 00:07:53.239 Writing superblocks and filesystem accounting information: 0/64 done 00:07:53.239 00:07:53.239 22:05:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:53.239 22:05:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:53.498 22:05:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:53.498 22:05:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:53.498 22:05:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:53.498 22:05:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:53.498 22:05:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:53.498 22:05:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:53.758 22:05:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2590822 00:07:53.758 22:05:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:53.758 22:05:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:53.758 22:05:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:53.758 22:05:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:53.758 00:07:53.758 real 0m1.350s 00:07:53.758 user 0m0.023s 00:07:53.758 sys 0m0.074s 00:07:53.758 22:05:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:53.758 22:05:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:53.758 ************************************ 00:07:53.758 END TEST filesystem_in_capsule_ext4 00:07:53.758 ************************************ 00:07:53.758 
22:05:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:53.758 22:05:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:53.758 22:05:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:53.758 22:05:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.758 22:05:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:53.758 ************************************ 00:07:53.758 START TEST filesystem_in_capsule_btrfs 00:07:53.758 ************************************ 00:07:53.758 22:05:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:53.758 22:05:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:53.758 22:05:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:53.758 22:05:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:53.758 22:05:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:53.758 22:05:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:53.758 22:05:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:53.758 22:05:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:53.758 22:05:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:53.758 22:05:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:53.758 22:05:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:54.017 btrfs-progs v6.6.2 00:07:54.017 See https://btrfs.readthedocs.io for more information. 00:07:54.017 00:07:54.017 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:54.017 NOTE: several default settings have changed in version 5.15, please make sure 00:07:54.017 this does not affect your deployments: 00:07:54.017 - DUP for metadata (-m dup) 00:07:54.017 - enabled no-holes (-O no-holes) 00:07:54.017 - enabled free-space-tree (-R free-space-tree) 00:07:54.017 00:07:54.017 Label: (null) 00:07:54.017 UUID: 49e2ec7a-542f-4eb3-913a-10c7bbc7b975 00:07:54.017 Node size: 16384 00:07:54.017 Sector size: 4096 00:07:54.017 Filesystem size: 510.00MiB 00:07:54.017 Block group profiles: 00:07:54.017 Data: single 8.00MiB 00:07:54.017 Metadata: DUP 32.00MiB 00:07:54.017 System: DUP 8.00MiB 00:07:54.017 SSD detected: yes 00:07:54.017 Zoned device: no 00:07:54.017 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:54.017 Runtime features: free-space-tree 00:07:54.017 Checksum: crc32c 00:07:54.017 Number of devices: 1 00:07:54.017 Devices: 00:07:54.017 ID SIZE PATH 00:07:54.017 1 510.00MiB /dev/nvme0n1p1 00:07:54.017 00:07:54.017 22:05:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:54.017 22:05:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:54.586 22:05:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:54.586 22:05:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:54.586 22:05:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:54.586 22:05:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:54.586 22:05:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:54.586 22:05:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:54.586 22:05:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2590822 00:07:54.586 22:05:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:54.586 22:05:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:54.586 22:05:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:54.586 22:05:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:54.586 00:07:54.586 real 0m0.956s 00:07:54.586 user 0m0.034s 00:07:54.586 sys 0m0.125s 00:07:54.587 22:05:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.587 22:05:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:54.587 ************************************ 00:07:54.587 END TEST filesystem_in_capsule_btrfs 00:07:54.587 ************************************ 00:07:54.847 22:05:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:07:54.847 22:05:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:54.847 22:05:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:54.847 22:05:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.847 22:05:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:54.847 ************************************ 00:07:54.847 START TEST filesystem_in_capsule_xfs 00:07:54.847 ************************************ 00:07:54.847 22:05:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:54.847 22:05:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:54.847 22:05:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:54.847 22:05:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:54.847 22:05:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:54.847 22:05:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:54.847 22:05:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:54.847 22:05:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:07:54.847 22:05:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:54.847 22:05:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:54.847 22:05:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:54.847 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:54.847 = sectsz=512 attr=2, projid32bit=1 00:07:54.847 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:54.847 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:54.847 data = bsize=4096 blocks=130560, imaxpct=25 00:07:54.847 = sunit=0 swidth=0 blks 00:07:54.847 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:54.847 log =internal log bsize=4096 blocks=16384, version=2 00:07:54.847 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:54.847 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:55.788 Discarding blocks...Done. 
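[Editor's note] The ext4, btrfs, and xfs START/END blocks above all go through the same make_filesystem helper: it only switches the force flag (-F for ext4, -f otherwise) before running the matching mkfs, and each test then mounts the partition, writes and removes a file, and unmounts. A simplified sketch of that pattern, using the trace's device and mount point; the real helper in common/autotest_common.sh also retries on failure (the i counter in the trace), which is omitted here:

  # Simplified sketch of the make_filesystem pattern seen in the trace (no retry loop).
  make_filesystem() {
      local fstype=$1 dev_name=$2 force
      if [ "$fstype" = ext4 ]; then
          force=-F          # ext4 tools force with -F
      else
          force=-f          # btrfs/xfs force with -f
      fi
      mkfs."$fstype" "$force" "$dev_name"
  }

  # Per-filesystem smoke test: mount, create/sync/remove a file, unmount.
  make_filesystem xfs /dev/nvme0n1p1
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device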
00:07:55.788 22:05:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:55.788 22:05:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:58.414 22:05:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:58.414 22:05:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:58.414 22:05:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:58.414 22:05:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:58.414 22:05:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:58.414 22:05:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:58.414 22:05:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2590822 00:07:58.414 22:05:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:58.414 22:05:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:58.414 22:05:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:58.414 22:05:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:58.414 00:07:58.414 real 0m3.539s 00:07:58.414 user 0m0.025s 00:07:58.414 sys 0m0.079s 00:07:58.414 22:05:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.414 22:05:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:58.414 ************************************ 00:07:58.414 END TEST filesystem_in_capsule_xfs 00:07:58.414 ************************************ 00:07:58.414 22:05:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:58.414 22:05:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:58.414 22:05:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:58.674 22:05:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:58.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:58.934 22:05:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:58.934 22:05:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:58.934 22:05:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:58.934 22:05:24 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:58.934 22:05:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:58.934 22:05:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:58.934 22:05:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:58.934 22:05:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:58.934 22:05:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.934 22:05:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:58.934 22:05:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.934 22:05:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:58.934 22:05:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2590822 00:07:58.934 22:05:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2590822 ']' 00:07:58.934 22:05:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2590822 00:07:58.934 22:05:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:58.934 22:05:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:58.934 22:05:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2590822 00:07:58.934 22:05:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:58.934 22:05:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:58.934 22:05:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2590822' 00:07:58.934 killing process with pid 2590822 00:07:58.934 22:05:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 2590822 00:07:58.934 22:05:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 2590822 00:07:59.195 22:05:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:59.195 00:07:59.195 real 0m13.007s 00:07:59.195 user 0m51.282s 00:07:59.195 sys 0m1.217s 00:07:59.195 22:05:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:59.195 22:05:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:59.195 ************************************ 00:07:59.195 END TEST nvmf_filesystem_in_capsule 00:07:59.195 ************************************ 00:07:59.195 22:05:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:59.195 22:05:24 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:59.195 22:05:24 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:07:59.195 22:05:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:59.195 22:05:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:59.195 22:05:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:59.195 22:05:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:59.195 22:05:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:59.195 rmmod nvme_tcp 00:07:59.455 rmmod nvme_fabrics 00:07:59.455 rmmod nvme_keyring 00:07:59.455 22:05:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:59.455 22:05:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:59.455 22:05:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:59.455 22:05:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:59.455 22:05:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:59.455 22:05:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:59.455 22:05:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:59.455 22:05:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:59.455 22:05:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:59.455 22:05:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.455 22:05:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:59.455 22:05:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.370 22:05:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:01.370 00:08:01.370 real 0m38.428s 00:08:01.370 user 1m55.643s 00:08:01.370 sys 0m7.917s 00:08:01.370 22:05:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:01.370 22:05:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:01.370 ************************************ 00:08:01.370 END TEST nvmf_filesystem 00:08:01.370 ************************************ 00:08:01.370 22:05:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:01.370 22:05:26 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:01.370 22:05:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:01.370 22:05:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:01.370 22:05:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:01.632 ************************************ 00:08:01.632 START TEST nvmf_target_discovery 00:08:01.632 ************************************ 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:01.632 * Looking for test storage... 
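[Editor's note] Before the discovery test output continues below, note the cleanup sequence the trace has just finished: drop the test partition, disconnect the initiator, delete the subsystem over RPC, stop the nvmf_tgt process, and unload the kernel NVMe/TCP modules. A hedged sketch of that teardown using only commands visible in the trace; the PID, device, and interface names are specific to this run, and the rpc.py path is an assumption for the rpc_cmd wrapper:

  # Teardown sketch mirroring the trace; 2590822 and cvl_0_1 are this run's values.
  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1       # drop the SPDK_TEST partition
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # rpc.py path assumed
  kill 2590822             # killprocess in the trace also waits for the PID to exit
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  ip -4 addr flush cvl_0_1                             # clear the test interface address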
00:08:01.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:01.632 22:05:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:08.238 22:05:33 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:08.238 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:08.238 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:08.238 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:08.238 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:08.238 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:08.499 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:08.499 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:08.499 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:08.499 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:08.499 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:08.499 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:08.499 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:08.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:08.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.529 ms 00:08:08.759 00:08:08.759 --- 10.0.0.2 ping statistics --- 00:08:08.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.759 rtt min/avg/max/mdev = 0.529/0.529/0.529/0.000 ms 00:08:08.759 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:08.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:08.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.433 ms 00:08:08.759 00:08:08.759 --- 10.0.0.1 ping statistics --- 00:08:08.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.759 rtt min/avg/max/mdev = 0.433/0.433/0.433/0.000 ms 00:08:08.759 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:08.759 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:08.759 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:08.759 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:08.759 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:08.759 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:08.759 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:08.759 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:08.759 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:08.759 22:05:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:08.759 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:08.760 22:05:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:08.760 22:05:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.760 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:08.760 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2597725 00:08:08.760 22:05:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2597725 00:08:08.760 22:05:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 2597725 ']' 00:08:08.760 22:05:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.760 22:05:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:08.760 22:05:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:08.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.760 22:05:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:08.760 22:05:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.760 [2024-07-15 22:05:33.925001] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:08:08.760 [2024-07-15 22:05:33.925063] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:08.760 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.760 [2024-07-15 22:05:33.994119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:08.760 [2024-07-15 22:05:34.060566] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:08.760 [2024-07-15 22:05:34.060599] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:08.760 [2024-07-15 22:05:34.060606] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:08.760 [2024-07-15 22:05:34.060613] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:08.760 [2024-07-15 22:05:34.060618] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:08.760 [2024-07-15 22:05:34.060760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:08.760 [2024-07-15 22:05:34.060880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:08.760 [2024-07-15 22:05:34.061035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.760 [2024-07-15 22:05:34.061037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:09.701 [2024-07-15 22:05:34.749793] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
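[Editor's note] The loop starting here builds four identical targets for the discovery test: a 102400-block, 512-byte null bdev, a subsystem, a namespace mapping, and a TCP listener for each. A minimal sketch of one iteration, with ./scripts/rpc.py standing in for the trace's rpc_cmd wrapper (an assumption about the invocation path); all RPC names and flags are taken from the trace:

  # One iteration of the Null1..Null4 setup loop (i=1 shown); rpc.py path assumed.
  i=1
  ./scripts/rpc.py bdev_null_create "Null$i" 102400 512
  ./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
      -a -s "SPDK0000000000000$i"                     # allow any host, fixed serial
  ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
  ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
      -t tcp -a 10.0.0.2 -s 4420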
00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:09.701 Null1 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:09.701 [2024-07-15 22:05:34.810080] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:09.701 Null2 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:09.701 22:05:34 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:09.701 Null3 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:09.701 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.702 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:09.702 Null4 00:08:09.702 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.702 22:05:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:09.702 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.702 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:09.702 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.702 22:05:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:09.702 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.702 22:05:34 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:09.702 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.702 22:05:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:09.702 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.702 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:09.702 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.702 22:05:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:09.702 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.702 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:09.702 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.702 22:05:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:09.702 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.702 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:09.702 22:05:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.702 22:05:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:08:09.962 00:08:09.962 Discovery Log Number of Records 6, Generation counter 6 00:08:09.962 =====Discovery Log Entry 0====== 00:08:09.962 trtype: tcp 00:08:09.962 adrfam: ipv4 00:08:09.962 subtype: current discovery subsystem 00:08:09.962 treq: not required 00:08:09.962 portid: 0 00:08:09.962 trsvcid: 4420 00:08:09.962 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:09.962 traddr: 10.0.0.2 00:08:09.962 eflags: explicit discovery connections, duplicate discovery information 00:08:09.962 sectype: none 00:08:09.962 =====Discovery Log Entry 1====== 00:08:09.962 trtype: tcp 00:08:09.962 adrfam: ipv4 00:08:09.962 subtype: nvme subsystem 00:08:09.962 treq: not required 00:08:09.962 portid: 0 00:08:09.962 trsvcid: 4420 00:08:09.962 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:09.962 traddr: 10.0.0.2 00:08:09.962 eflags: none 00:08:09.962 sectype: none 00:08:09.962 =====Discovery Log Entry 2====== 00:08:09.962 trtype: tcp 00:08:09.962 adrfam: ipv4 00:08:09.962 subtype: nvme subsystem 00:08:09.962 treq: not required 00:08:09.963 portid: 0 00:08:09.963 trsvcid: 4420 00:08:09.963 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:09.963 traddr: 10.0.0.2 00:08:09.963 eflags: none 00:08:09.963 sectype: none 00:08:09.963 =====Discovery Log Entry 3====== 00:08:09.963 trtype: tcp 00:08:09.963 adrfam: ipv4 00:08:09.963 subtype: nvme subsystem 00:08:09.963 treq: not required 00:08:09.963 portid: 0 00:08:09.963 trsvcid: 4420 00:08:09.963 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:09.963 traddr: 10.0.0.2 00:08:09.963 eflags: none 00:08:09.963 sectype: none 00:08:09.963 =====Discovery Log Entry 4====== 00:08:09.963 trtype: tcp 00:08:09.963 adrfam: ipv4 00:08:09.963 subtype: nvme subsystem 00:08:09.963 treq: not required 
00:08:09.963 portid: 0 00:08:09.963 trsvcid: 4420 00:08:09.963 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:09.963 traddr: 10.0.0.2 00:08:09.963 eflags: none 00:08:09.963 sectype: none 00:08:09.963 =====Discovery Log Entry 5====== 00:08:09.963 trtype: tcp 00:08:09.963 adrfam: ipv4 00:08:09.963 subtype: discovery subsystem referral 00:08:09.963 treq: not required 00:08:09.963 portid: 0 00:08:09.963 trsvcid: 4430 00:08:09.963 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:09.963 traddr: 10.0.0.2 00:08:09.963 eflags: none 00:08:09.963 sectype: none 00:08:09.963 22:05:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:09.963 Perform nvmf subsystem discovery via RPC 00:08:09.963 22:05:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:09.963 22:05:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.963 22:05:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:09.963 [ 00:08:09.963 { 00:08:09.963 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:09.963 "subtype": "Discovery", 00:08:09.963 "listen_addresses": [ 00:08:09.963 { 00:08:09.963 "trtype": "TCP", 00:08:09.963 "adrfam": "IPv4", 00:08:09.963 "traddr": "10.0.0.2", 00:08:09.963 "trsvcid": "4420" 00:08:09.963 } 00:08:09.963 ], 00:08:09.963 "allow_any_host": true, 00:08:09.963 "hosts": [] 00:08:09.963 }, 00:08:09.963 { 00:08:09.963 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:09.963 "subtype": "NVMe", 00:08:09.963 "listen_addresses": [ 00:08:09.963 { 00:08:09.963 "trtype": "TCP", 00:08:09.963 "adrfam": "IPv4", 00:08:09.963 "traddr": "10.0.0.2", 00:08:09.963 "trsvcid": "4420" 00:08:09.963 } 00:08:09.963 ], 00:08:09.963 "allow_any_host": true, 00:08:09.963 "hosts": [], 00:08:09.963 "serial_number": "SPDK00000000000001", 00:08:09.963 "model_number": "SPDK bdev Controller", 00:08:09.963 "max_namespaces": 32, 00:08:09.963 "min_cntlid": 1, 00:08:09.963 "max_cntlid": 65519, 00:08:09.963 "namespaces": [ 00:08:09.963 { 00:08:09.963 "nsid": 1, 00:08:09.963 "bdev_name": "Null1", 00:08:09.963 "name": "Null1", 00:08:09.963 "nguid": "3862EA8BE7464A33A3DD9A77C04528B8", 00:08:09.963 "uuid": "3862ea8b-e746-4a33-a3dd-9a77c04528b8" 00:08:09.963 } 00:08:09.963 ] 00:08:09.963 }, 00:08:09.963 { 00:08:09.963 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:09.963 "subtype": "NVMe", 00:08:09.963 "listen_addresses": [ 00:08:09.963 { 00:08:09.963 "trtype": "TCP", 00:08:09.963 "adrfam": "IPv4", 00:08:09.963 "traddr": "10.0.0.2", 00:08:09.963 "trsvcid": "4420" 00:08:09.963 } 00:08:09.963 ], 00:08:09.963 "allow_any_host": true, 00:08:09.963 "hosts": [], 00:08:09.963 "serial_number": "SPDK00000000000002", 00:08:09.963 "model_number": "SPDK bdev Controller", 00:08:09.963 "max_namespaces": 32, 00:08:09.963 "min_cntlid": 1, 00:08:09.963 "max_cntlid": 65519, 00:08:09.963 "namespaces": [ 00:08:09.963 { 00:08:09.963 "nsid": 1, 00:08:09.963 "bdev_name": "Null2", 00:08:09.963 "name": "Null2", 00:08:09.963 "nguid": "C7D0B3C4C545491BA2390235405E2142", 00:08:09.963 "uuid": "c7d0b3c4-c545-491b-a239-0235405e2142" 00:08:09.963 } 00:08:09.963 ] 00:08:09.963 }, 00:08:09.963 { 00:08:09.963 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:09.963 "subtype": "NVMe", 00:08:09.963 "listen_addresses": [ 00:08:09.963 { 00:08:09.963 "trtype": "TCP", 00:08:09.963 "adrfam": "IPv4", 00:08:09.963 "traddr": "10.0.0.2", 00:08:09.963 "trsvcid": "4420" 00:08:09.963 } 00:08:09.963 ], 00:08:09.963 "allow_any_host": true, 
00:08:09.963 "hosts": [], 00:08:09.963 "serial_number": "SPDK00000000000003", 00:08:09.963 "model_number": "SPDK bdev Controller", 00:08:09.963 "max_namespaces": 32, 00:08:09.963 "min_cntlid": 1, 00:08:09.963 "max_cntlid": 65519, 00:08:09.963 "namespaces": [ 00:08:09.963 { 00:08:09.963 "nsid": 1, 00:08:09.963 "bdev_name": "Null3", 00:08:09.963 "name": "Null3", 00:08:09.963 "nguid": "14760477405A4C43BFEBF2082499C5C0", 00:08:09.963 "uuid": "14760477-405a-4c43-bfeb-f2082499c5c0" 00:08:09.963 } 00:08:09.963 ] 00:08:09.963 }, 00:08:09.963 { 00:08:09.963 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:09.963 "subtype": "NVMe", 00:08:09.963 "listen_addresses": [ 00:08:09.963 { 00:08:09.963 "trtype": "TCP", 00:08:09.963 "adrfam": "IPv4", 00:08:09.963 "traddr": "10.0.0.2", 00:08:09.963 "trsvcid": "4420" 00:08:09.963 } 00:08:09.963 ], 00:08:09.963 "allow_any_host": true, 00:08:09.963 "hosts": [], 00:08:09.963 "serial_number": "SPDK00000000000004", 00:08:09.963 "model_number": "SPDK bdev Controller", 00:08:09.963 "max_namespaces": 32, 00:08:09.963 "min_cntlid": 1, 00:08:09.963 "max_cntlid": 65519, 00:08:09.963 "namespaces": [ 00:08:09.963 { 00:08:09.963 "nsid": 1, 00:08:09.963 "bdev_name": "Null4", 00:08:09.963 "name": "Null4", 00:08:09.963 "nguid": "4178D9BCBA1E4C0496D38523CFB7DE67", 00:08:09.963 "uuid": "4178d9bc-ba1e-4c04-96d3-8523cfb7de67" 00:08:09.963 } 00:08:09.963 ] 00:08:09.963 } 00:08:09.963 ] 00:08:09.963 22:05:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.963 22:05:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:09.963 22:05:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:09.963 22:05:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:09.963 22:05:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.963 22:05:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:09.963 22:05:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.963 22:05:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:09.963 22:05:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.963 22:05:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:09.963 22:05:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.963 22:05:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:09.963 22:05:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:09.963 22:05:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.963 22:05:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:09.963 22:05:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.963 22:05:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:09.963 22:05:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.963 22:05:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:09.963 22:05:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:09.963 22:05:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:09.963 22:05:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:09.963 22:05:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.963 22:05:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:09.963 22:05:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.963 22:05:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:09.963 22:05:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.963 22:05:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:09.963 22:05:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.963 22:05:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:09.963 22:05:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:09.964 22:05:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.964 22:05:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:09.964 22:05:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.964 22:05:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:09.964 22:05:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.964 22:05:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:09.964 22:05:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.964 22:05:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:09.964 22:05:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.964 22:05:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:09.964 22:05:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.964 22:05:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:09.964 22:05:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:09.964 22:05:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.964 22:05:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:09.964 22:05:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.225 22:05:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:10.225 22:05:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:10.225 22:05:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:10.225 22:05:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:10.225 22:05:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:10.225 22:05:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:10.225 22:05:35 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:10.225 22:05:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:10.225 22:05:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:10.225 22:05:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:10.225 rmmod nvme_tcp 00:08:10.225 rmmod nvme_fabrics 00:08:10.225 rmmod nvme_keyring 00:08:10.225 22:05:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:10.225 22:05:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:10.225 22:05:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:10.225 22:05:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2597725 ']' 00:08:10.225 22:05:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2597725 00:08:10.225 22:05:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 2597725 ']' 00:08:10.225 22:05:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 2597725 00:08:10.225 22:05:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:08:10.225 22:05:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:10.225 22:05:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2597725 00:08:10.225 22:05:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:10.225 22:05:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:10.225 22:05:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2597725' 00:08:10.225 killing process with pid 2597725 00:08:10.225 22:05:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 2597725 00:08:10.225 22:05:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 2597725 00:08:10.487 22:05:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:10.487 22:05:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:10.487 22:05:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:10.487 22:05:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:10.487 22:05:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:10.487 22:05:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.487 22:05:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:10.487 22:05:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.401 22:05:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:12.401 00:08:12.401 real 0m10.920s 00:08:12.401 user 0m8.171s 00:08:12.401 sys 0m5.558s 00:08:12.401 22:05:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:12.401 22:05:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.401 ************************************ 00:08:12.401 END TEST nvmf_target_discovery 00:08:12.401 ************************************ 00:08:12.401 22:05:37 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:08:12.401 22:05:37 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:12.401 22:05:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:12.401 22:05:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.401 22:05:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:12.401 ************************************ 00:08:12.401 START TEST nvmf_referrals 00:08:12.401 ************************************ 00:08:12.401 22:05:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:12.662 * Looking for test storage... 00:08:12.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:12.662 22:05:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:12.662 22:05:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:12.662 22:05:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:12.662 22:05:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:12.662 22:05:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:12.662 22:05:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:12.662 22:05:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:12.662 22:05:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:12.662 22:05:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:12.662 22:05:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:12.662 22:05:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:12.662 22:05:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:12.662 22:05:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:12.662 22:05:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:12.662 22:05:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:12.662 22:05:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:12.662 22:05:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:12.662 22:05:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:12.662 22:05:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:12.662 22:05:37 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:12.662 22:05:37 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:12.662 22:05:37 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:12.662 22:05:37 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.662 22:05:37 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.662 22:05:37 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.662 22:05:37 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:12.662 22:05:37 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.662 22:05:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:12.662 22:05:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:12.662 22:05:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:12.662 22:05:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:12.662 22:05:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:12.662 22:05:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:12.662 22:05:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:12.662 22:05:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:12.662 22:05:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:12.662 22:05:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:12.662 22:05:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:12.662 22:05:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
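referrals.sh, whose setup is being traced here, defines three referral addresses (127.0.0.2 through 127.0.0.4, referral port 4430) and later in the trace adds, lists, and removes them against the discovery service listening on 10.0.0.2:8009. A condensed sketch of that round-trip, assuming the same rpc.py wrapper as above and the --hostnqn/--hostid flags shown in the trace where nvme discover requires them:

    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    $rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort     # expect the three referral IPs
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $rpc nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
    done
    $rpc nvmf_discovery_get_referrals | jq length                             # back to 0

The later part of the trace repeats the exercise with subsystem-qualified referrals (-n nqn.2016-06.io.spdk:cnode1 and -n discovery) before the test cleans up and exits.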
00:08:12.662 22:05:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:12.662 22:05:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:12.663 22:05:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:12.663 22:05:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:12.663 22:05:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:12.663 22:05:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:12.663 22:05:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:12.663 22:05:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:12.663 22:05:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:12.663 22:05:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.663 22:05:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:12.663 22:05:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.663 22:05:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:12.663 22:05:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:12.663 22:05:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:12.663 22:05:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:20.819 22:05:44 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:20.819 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:20.819 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:20.819 22:05:44 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:20.819 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:20.819 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:20.819 22:05:44 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:20.819 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:20.819 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.496 ms 00:08:20.819 00:08:20.819 --- 10.0.0.2 ping statistics --- 00:08:20.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:20.819 rtt min/avg/max/mdev = 0.496/0.496/0.496/0.000 ms 00:08:20.819 22:05:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:20.819 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:20.819 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.374 ms 00:08:20.819 00:08:20.819 --- 10.0.0.1 ping statistics --- 00:08:20.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:20.819 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms 00:08:20.819 22:05:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:20.819 22:05:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:20.819 22:05:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:20.819 22:05:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:20.819 22:05:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:20.819 22:05:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:20.819 22:05:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:20.819 22:05:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:20.819 22:05:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:20.819 22:05:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:20.819 22:05:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:20.819 22:05:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:20.819 22:05:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:20.819 22:05:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2602185 00:08:20.819 22:05:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2602185 00:08:20.819 22:05:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:20.819 22:05:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 2602185 ']' 00:08:20.819 22:05:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.819 22:05:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:20.819 22:05:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:20.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.819 22:05:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:20.819 22:05:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:20.819 [2024-07-15 22:05:45.113253] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:08:20.820 [2024-07-15 22:05:45.113324] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:20.820 EAL: No free 2048 kB hugepages reported on node 1 00:08:20.820 [2024-07-15 22:05:45.184376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:20.820 [2024-07-15 22:05:45.261455] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:20.820 [2024-07-15 22:05:45.261493] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:20.820 [2024-07-15 22:05:45.261501] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:20.820 [2024-07-15 22:05:45.261508] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:20.820 [2024-07-15 22:05:45.261513] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:20.820 [2024-07-15 22:05:45.261655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.820 [2024-07-15 22:05:45.261773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:20.820 [2024-07-15 22:05:45.261930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.820 [2024-07-15 22:05:45.261931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:20.820 22:05:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:20.820 22:05:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:08:20.820 22:05:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:20.820 22:05:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:20.820 22:05:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:20.820 22:05:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:20.820 22:05:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:20.820 22:05:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.820 22:05:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:20.820 [2024-07-15 22:05:45.940756] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:20.820 22:05:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.820 22:05:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:20.820 22:05:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.820 22:05:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:20.820 [2024-07-15 22:05:45.956914] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:08:20.820 22:05:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.820 22:05:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:20.820 22:05:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.820 22:05:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:20.820 22:05:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.820 22:05:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:20.820 22:05:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.820 22:05:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:20.820 22:05:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.820 22:05:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:20.820 22:05:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.820 22:05:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:20.820 22:05:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.820 22:05:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:20.820 22:05:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:20.820 22:05:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.820 22:05:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:20.820 22:05:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.820 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:20.820 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:20.820 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:20.820 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:20.820 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:20.820 22:05:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.820 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:20.820 22:05:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:20.820 22:05:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.820 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:20.820 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:20.820 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:20.820 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:20.820 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:20.820 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:20.820 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:20.820 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:21.080 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:21.080 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:21.080 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:21.080 22:05:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.080 22:05:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:21.080 22:05:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.080 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:21.080 22:05:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.080 22:05:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:21.080 22:05:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.080 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:21.080 22:05:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.080 22:05:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:21.080 22:05:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.080 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:21.080 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:21.080 22:05:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.080 22:05:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:21.080 22:05:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.341 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:21.341 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:21.341 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:21.341 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:21.341 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:21.341 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:21.341 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:21.341 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:21.341 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:21.341 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:08:21.341 22:05:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.341 22:05:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:21.341 22:05:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.341 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:21.341 22:05:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.341 22:05:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:21.341 22:05:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.341 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:21.341 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:21.341 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:21.341 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:21.341 22:05:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.341 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:21.341 22:05:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:21.341 22:05:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.341 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:21.341 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:21.341 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:21.341 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:21.341 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:21.341 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:21.341 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:21.341 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:21.624 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:21.624 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:21.624 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:21.624 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:21.624 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:21.624 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:21.624 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:21.624 22:05:46 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:21.624 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:21.624 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:21.624 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:21.624 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:21.624 22:05:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:21.889 22:05:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:21.889 22:05:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:21.889 22:05:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.889 22:05:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:21.889 22:05:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.889 22:05:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:21.889 22:05:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:21.889 22:05:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:21.889 22:05:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:21.889 22:05:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.889 22:05:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:21.889 22:05:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:21.889 22:05:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.889 22:05:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:21.889 22:05:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:21.889 22:05:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:21.889 22:05:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:21.889 22:05:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:21.889 22:05:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:21.889 22:05:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:21.889 22:05:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:21.889 22:05:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:21.889 22:05:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:21.889 22:05:47 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:21.889 22:05:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:21.889 22:05:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:21.889 22:05:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:21.889 22:05:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:22.151 22:05:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:22.151 22:05:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:22.151 22:05:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:22.151 22:05:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:22.151 22:05:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:22.151 22:05:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:22.151 22:05:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:22.151 22:05:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:22.151 22:05:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.151 22:05:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:22.151 22:05:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.151 22:05:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:22.151 22:05:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:22.151 22:05:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.151 22:05:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:22.151 22:05:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.412 22:05:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:22.412 22:05:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:22.412 22:05:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:22.412 22:05:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:22.412 22:05:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:22.412 22:05:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:22.412 22:05:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:22.412 
22:05:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:22.412 22:05:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:22.412 22:05:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:22.412 22:05:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:22.412 22:05:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:22.412 22:05:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:22.412 22:05:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:22.412 22:05:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:22.412 22:05:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:22.412 22:05:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:22.412 rmmod nvme_tcp 00:08:22.412 rmmod nvme_fabrics 00:08:22.412 rmmod nvme_keyring 00:08:22.412 22:05:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:22.412 22:05:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:22.412 22:05:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:22.412 22:05:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2602185 ']' 00:08:22.412 22:05:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2602185 00:08:22.412 22:05:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 2602185 ']' 00:08:22.412 22:05:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 2602185 00:08:22.412 22:05:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:08:22.412 22:05:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:22.412 22:05:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2602185 00:08:22.673 22:05:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:22.673 22:05:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:22.673 22:05:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2602185' 00:08:22.673 killing process with pid 2602185 00:08:22.673 22:05:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 2602185 00:08:22.673 22:05:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 2602185 00:08:22.673 22:05:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:22.673 22:05:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:22.673 22:05:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:22.673 22:05:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:22.673 22:05:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:22.673 22:05:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.673 22:05:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:22.673 22:05:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.236 22:05:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:25.236 00:08:25.236 real 0m12.229s 00:08:25.236 user 0m13.382s 00:08:25.236 sys 0m6.087s 00:08:25.236 22:05:49 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:25.236 22:05:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:25.236 ************************************ 00:08:25.236 END TEST nvmf_referrals 00:08:25.236 ************************************ 00:08:25.236 22:05:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:25.236 22:05:49 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:25.236 22:05:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:25.236 22:05:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.236 22:05:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:25.236 ************************************ 00:08:25.236 START TEST nvmf_connect_disconnect 00:08:25.236 ************************************ 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:25.236 * Looking for test storage... 00:08:25.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:25.236 22:05:50 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:25.236 22:05:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:31.822 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:31.822 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:31.822 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:31.822 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:31.822 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:31.822 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:31.822 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:31.822 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:31.822 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:31.822 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:31.822 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:31.822 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:31.822 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:31.822 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:31.822 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:31.822 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:31.822 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:31.822 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:31.822 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:31.822 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:31.822 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:31.822 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:31.822 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:31.822 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:31.822 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:31.822 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:31.822 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:31.822 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:31.822 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:31.822 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:31.822 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:31.822 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:31.822 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:31.822 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:31.822 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:31.822 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:31.822 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:31.822 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.822 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.822 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:31.822 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:31.823 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:31.823 22:05:56 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:31.823 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:31.823 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:31.823 22:05:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:31.823 22:05:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:31.823 22:05:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:31.823 22:05:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:31.823 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:31.823 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.547 ms 00:08:31.823 00:08:31.823 --- 10.0.0.2 ping statistics --- 00:08:31.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.823 rtt min/avg/max/mdev = 0.547/0.547/0.547/0.000 ms 00:08:31.823 22:05:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:32.084 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:32.084 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.458 ms 00:08:32.084 00:08:32.084 --- 10.0.0.1 ping statistics --- 00:08:32.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.084 rtt min/avg/max/mdev = 0.458/0.458/0.458/0.000 ms 00:08:32.084 22:05:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:32.084 22:05:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:32.084 22:05:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:32.084 22:05:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:32.084 22:05:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:32.084 22:05:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:32.084 22:05:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:32.084 22:05:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:32.084 22:05:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:32.084 22:05:57 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:32.084 22:05:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:32.084 22:05:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:32.084 22:05:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:32.084 22:05:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2606992 00:08:32.084 22:05:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2606992 00:08:32.084 22:05:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:32.084 22:05:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 2606992 ']' 00:08:32.084 22:05:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.084 22:05:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:32.084 22:05:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.084 22:05:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:32.084 22:05:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:32.084 [2024-07-15 22:05:57.260487] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
00:08:32.084 [2024-07-15 22:05:57.260558] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.084 EAL: No free 2048 kB hugepages reported on node 1 00:08:32.084 [2024-07-15 22:05:57.334014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:32.343 [2024-07-15 22:05:57.409546] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:32.343 [2024-07-15 22:05:57.409583] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:32.343 [2024-07-15 22:05:57.409591] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:32.343 [2024-07-15 22:05:57.409598] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:32.343 [2024-07-15 22:05:57.409604] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:32.343 [2024-07-15 22:05:57.409758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.343 [2024-07-15 22:05:57.409872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.343 [2024-07-15 22:05:57.410027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.343 [2024-07-15 22:05:57.410028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:32.912 22:05:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:32.912 22:05:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:08:32.912 22:05:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:32.912 22:05:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:32.912 22:05:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:32.912 22:05:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.912 22:05:58 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:32.912 22:05:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.912 22:05:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:32.912 [2024-07-15 22:05:58.084736] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:32.912 22:05:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.912 22:05:58 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:32.912 22:05:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.912 22:05:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:32.912 22:05:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.912 22:05:58 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:32.913 22:05:58 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:32.913 22:05:58 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.913 22:05:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:32.913 22:05:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.913 22:05:58 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:32.913 22:05:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.913 22:05:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:32.913 22:05:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.913 22:05:58 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:32.913 22:05:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.913 22:05:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:32.913 [2024-07-15 22:05:58.144119] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:32.913 22:05:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.913 22:05:58 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:32.913 22:05:58 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:32.913 22:05:58 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:37.113 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:44.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.914 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.208 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.208 22:06:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:51.208 22:06:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:51.208 22:06:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:51.208 22:06:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:08:51.208 22:06:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:51.208 22:06:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:08:51.208 22:06:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:51.208 22:06:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:51.208 rmmod nvme_tcp 00:08:51.208 rmmod nvme_fabrics 00:08:51.208 rmmod nvme_keyring 00:08:51.208 22:06:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:51.208 22:06:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:08:51.208 22:06:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:08:51.208 22:06:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2606992 ']' 00:08:51.208 22:06:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2606992 00:08:51.208 22:06:16 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@948 -- # '[' -z 2606992 ']' 00:08:51.208 22:06:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 2606992 00:08:51.208 22:06:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:08:51.208 22:06:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:51.208 22:06:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2606992 00:08:51.468 22:06:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:51.468 22:06:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:51.468 22:06:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2606992' 00:08:51.468 killing process with pid 2606992 00:08:51.468 22:06:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 2606992 00:08:51.468 22:06:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 2606992 00:08:51.468 22:06:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:51.468 22:06:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:51.468 22:06:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:51.468 22:06:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:51.468 22:06:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:51.468 22:06:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.468 22:06:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:51.468 22:06:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:54.046 22:06:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:54.046 00:08:54.046 real 0m28.756s 00:08:54.046 user 1m18.943s 00:08:54.046 sys 0m6.410s 00:08:54.046 22:06:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:54.046 22:06:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:54.046 ************************************ 00:08:54.046 END TEST nvmf_connect_disconnect 00:08:54.046 ************************************ 00:08:54.046 22:06:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:54.046 22:06:18 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:54.046 22:06:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:54.046 22:06:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:54.046 22:06:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:54.046 ************************************ 00:08:54.046 START TEST nvmf_multitarget 00:08:54.046 ************************************ 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:54.046 * Looking for test storage... 
00:08:54.046 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
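At this point nvmftestinit for the multitarget test starts rebuilding the test network topology; the exact commands are traced in the entries that follow (and were traced once already for the connect_disconnect run above). As a minimal sketch only -- the interface names, the namespace name and the 10.0.0.x addresses are copied from this trace and are specific to this CI host, not part of any general recipe -- the wiring amounts to:
  # Sketch of the namespace wiring traced below; not the helper's actual code.
  ip -4 addr flush cvl_0_0                               # clear any leftover addresses
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                           # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # move the target NIC into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # initiator -> target reachability check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator check
The target application itself is then started inside that namespace (the ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF invocation traced below), so the 10.0.0.2 listener is only reachable from the initiator side via cvl_0_1.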
00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:08:54.046 22:06:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:00.639 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:00.639 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:00.639 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
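The loop traced here resolves each supported PCI function to its kernel net interface by globbing sysfs, then keeps only the basename (the ##*/ strip seen in the surrounding entries). A standalone sketch of the same lookup, using the two PCI addresses reported in this run; the operstate read is an assumption suggested by the [[ up == up ]] comparison and is not necessarily how the helper decides link state:
  # Sketch only: mirrors the /sys/bus/pci/devices/$pci/net/* expansion in the trace.
  for pci in 0000:4b:00.0 0000:4b:00.1; do
    for path in /sys/bus/pci/devices/$pci/net/*; do
      dev=${path##*/}                                  # e.g. cvl_0_0, cvl_0_1
      state=$(cat "$path/operstate" 2>/dev/null)       # assumption: link-state check
      echo "Found net devices under $pci: $dev ($state)"
    done
  done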
00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:00.639 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:00.639 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:00.640 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:00.640 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:00.640 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:00.640 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:00.901 22:06:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:00.901 22:06:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:00.901 22:06:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:00.901 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:00.901 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.564 ms 00:09:00.901 00:09:00.901 --- 10.0.0.2 ping statistics --- 00:09:00.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.901 rtt min/avg/max/mdev = 0.564/0.564/0.564/0.000 ms 00:09:00.901 22:06:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:00.901 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:00.901 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.360 ms 00:09:00.901 00:09:00.901 --- 10.0.0.1 ping statistics --- 00:09:00.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.901 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:09:00.901 22:06:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:00.901 22:06:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:09:00.901 22:06:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:00.901 22:06:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:00.901 22:06:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:00.901 22:06:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:00.901 22:06:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:00.901 22:06:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:00.901 22:06:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:00.901 22:06:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:09:00.901 22:06:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:00.901 22:06:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:00.901 22:06:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:00.901 22:06:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2615563 00:09:00.901 22:06:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2615563 00:09:00.901 22:06:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:00.901 22:06:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 2615563 ']' 00:09:00.901 22:06:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.901 22:06:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:00.901 22:06:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.901 22:06:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:00.901 22:06:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:00.901 [2024-07-15 22:06:26.158824] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
00:09:00.901 [2024-07-15 22:06:26.158887] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:00.901 EAL: No free 2048 kB hugepages reported on node 1 00:09:01.162 [2024-07-15 22:06:26.231064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:01.162 [2024-07-15 22:06:26.308632] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:01.162 [2024-07-15 22:06:26.308670] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:01.162 [2024-07-15 22:06:26.308678] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:01.162 [2024-07-15 22:06:26.308684] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:01.162 [2024-07-15 22:06:26.308689] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:01.162 [2024-07-15 22:06:26.308828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.162 [2024-07-15 22:06:26.308945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:01.162 [2024-07-15 22:06:26.309102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.162 [2024-07-15 22:06:26.309103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:01.734 22:06:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:01.734 22:06:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:09:01.734 22:06:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:01.734 22:06:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:01.734 22:06:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:01.734 22:06:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:01.734 22:06:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:01.734 22:06:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:01.734 22:06:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:09:01.994 22:06:27 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:09:01.994 22:06:27 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:09:01.994 "nvmf_tgt_1" 00:09:01.994 22:06:27 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:09:01.994 "nvmf_tgt_2" 00:09:01.994 22:06:27 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:01.994 22:06:27 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:09:02.254 22:06:27 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:09:02.254 22:06:27 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:09:02.254 true 00:09:02.254 22:06:27 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:09:02.254 true 00:09:02.254 22:06:27 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:02.254 22:06:27 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:09:02.514 22:06:27 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:09:02.514 22:06:27 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:02.514 22:06:27 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:09:02.514 22:06:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:02.514 22:06:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:09:02.514 22:06:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:02.514 22:06:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:09:02.514 22:06:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:02.514 22:06:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:02.514 rmmod nvme_tcp 00:09:02.514 rmmod nvme_fabrics 00:09:02.514 rmmod nvme_keyring 00:09:02.514 22:06:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:02.514 22:06:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:09:02.514 22:06:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:09:02.514 22:06:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2615563 ']' 00:09:02.514 22:06:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2615563 00:09:02.514 22:06:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 2615563 ']' 00:09:02.514 22:06:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 2615563 00:09:02.514 22:06:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:09:02.514 22:06:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:02.514 22:06:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2615563 00:09:02.514 22:06:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:02.514 22:06:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:02.514 22:06:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2615563' 00:09:02.514 killing process with pid 2615563 00:09:02.514 22:06:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 2615563 00:09:02.514 22:06:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 2615563 00:09:02.775 22:06:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:02.775 22:06:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:02.775 22:06:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:02.775 22:06:27 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:02.775 22:06:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:02.775 22:06:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.775 22:06:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:02.775 22:06:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:04.690 22:06:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:04.690 00:09:04.690 real 0m11.124s 00:09:04.690 user 0m9.151s 00:09:04.690 sys 0m5.672s 00:09:04.690 22:06:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:04.690 22:06:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:04.690 ************************************ 00:09:04.690 END TEST nvmf_multitarget 00:09:04.690 ************************************ 00:09:04.952 22:06:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:04.952 22:06:30 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:04.952 22:06:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:04.952 22:06:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:04.952 22:06:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:04.952 ************************************ 00:09:04.952 START TEST nvmf_rpc 00:09:04.952 ************************************ 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:04.952 * Looking for test storage... 
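The nvmf_multitarget test that just finished exercises multi-target support purely over RPC: count the default target, add two more, check the count went from 1 to 3, remove them, and check it is back to 1 (each create call echoes the new target's name, hence the bare "nvmf_tgt_1"/"nvmf_tgt_2" lines in the trace). Condensed, with the full multitarget_rpc.py path from the trace abbreviated to $rpc:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

  [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists
  $rpc nvmf_create_target -n nvmf_tgt_1 -s 32        # flags exactly as in the trace
  $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]
  $rpc nvmf_delete_target -n nvmf_tgt_1
  $rpc nvmf_delete_target -n nvmf_tgt_2
  [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # back to the default target only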
00:09:04.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:09:04.952 22:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.170 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:13.170 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:09:13.170 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
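Sourcing nvmf/common.sh, as traced above, also fixes the host identity reused by every later nvme connect: a UUID-based host NQN from nvme gen-hostnqn, a host ID taken from the UUID portion of that NQN, and the default port and serial number. A sketch of those assignments, matching the values in the trace (the exact way common.sh derives NVME_HOSTID from the NQN is inferred, not shown in the log):

  NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be here
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # 00d0226a-fbea-ec11-9bc7-a4bf019282be (inferred derivation)
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  NVMF_PORT=4420                        # NVMe/TCP listener port used throughout
  NVMF_SERIAL=SPDKISFASTANDAWESOME      # subsystem serial that waitforserial later greps out of lsblk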
00:09:13.170 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:13.170 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:13.170 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:13.170 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:13.170 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:09:13.170 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:13.170 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:09:13.170 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:09:13.170 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:09:13.170 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:09:13.170 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:09:13.170 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:13.171 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:13.171 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:13.171 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:13.171 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:13.171 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:13.171 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.519 ms 00:09:13.171 00:09:13.171 --- 10.0.0.2 ping statistics --- 00:09:13.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.171 rtt min/avg/max/mdev = 0.519/0.519/0.519/0.000 ms 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:13.171 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:13.171 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.364 ms 00:09:13.171 00:09:13.171 --- 10.0.0.1 ping statistics --- 00:09:13.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.171 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2620248 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2620248 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 2620248 ']' 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:13.171 22:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.171 [2024-07-15 22:06:37.500915] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:09:13.171 [2024-07-15 22:06:37.500984] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:13.171 EAL: No free 2048 kB hugepages reported on node 1 00:09:13.171 [2024-07-15 22:06:37.572646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:13.171 [2024-07-15 22:06:37.647104] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:13.171 [2024-07-15 22:06:37.647144] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
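The nvmf_tcp_init steps traced just above build the point-to-point network every test in this run reuses: the first e810 port is moved into a private namespace and becomes the target side at 10.0.0.2, the second stays in the default namespace as the initiator side at 10.0.0.1, and TCP port 4420 is opened in the firewall before both directions are pinged. The same commands, condensed:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # allow NVMe/TCP into the initiator-side port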
00:09:13.171 [2024-07-15 22:06:37.647152] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:13.171 [2024-07-15 22:06:37.647158] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:13.171 [2024-07-15 22:06:37.647163] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:13.171 [2024-07-15 22:06:37.647300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:13.171 [2024-07-15 22:06:37.647489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:13.171 [2024-07-15 22:06:37.647647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:13.171 [2024-07-15 22:06:37.647648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.171 22:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:13.171 22:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:13.171 22:06:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:13.171 22:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:13.171 22:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.171 22:06:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:13.171 22:06:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:09:13.172 22:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.172 22:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.172 22:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.172 22:06:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:09:13.172 "tick_rate": 2400000000, 00:09:13.172 "poll_groups": [ 00:09:13.172 { 00:09:13.172 "name": "nvmf_tgt_poll_group_000", 00:09:13.172 "admin_qpairs": 0, 00:09:13.172 "io_qpairs": 0, 00:09:13.172 "current_admin_qpairs": 0, 00:09:13.172 "current_io_qpairs": 0, 00:09:13.172 "pending_bdev_io": 0, 00:09:13.172 "completed_nvme_io": 0, 00:09:13.172 "transports": [] 00:09:13.172 }, 00:09:13.172 { 00:09:13.172 "name": "nvmf_tgt_poll_group_001", 00:09:13.172 "admin_qpairs": 0, 00:09:13.172 "io_qpairs": 0, 00:09:13.172 "current_admin_qpairs": 0, 00:09:13.172 "current_io_qpairs": 0, 00:09:13.172 "pending_bdev_io": 0, 00:09:13.172 "completed_nvme_io": 0, 00:09:13.172 "transports": [] 00:09:13.172 }, 00:09:13.172 { 00:09:13.172 "name": "nvmf_tgt_poll_group_002", 00:09:13.172 "admin_qpairs": 0, 00:09:13.172 "io_qpairs": 0, 00:09:13.172 "current_admin_qpairs": 0, 00:09:13.172 "current_io_qpairs": 0, 00:09:13.172 "pending_bdev_io": 0, 00:09:13.172 "completed_nvme_io": 0, 00:09:13.172 "transports": [] 00:09:13.172 }, 00:09:13.172 { 00:09:13.172 "name": "nvmf_tgt_poll_group_003", 00:09:13.172 "admin_qpairs": 0, 00:09:13.172 "io_qpairs": 0, 00:09:13.172 "current_admin_qpairs": 0, 00:09:13.172 "current_io_qpairs": 0, 00:09:13.172 "pending_bdev_io": 0, 00:09:13.172 "completed_nvme_io": 0, 00:09:13.172 "transports": [] 00:09:13.172 } 00:09:13.172 ] 00:09:13.172 }' 00:09:13.172 22:06:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:09:13.172 22:06:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:09:13.172 22:06:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:09:13.172 22:06:38 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:09:13.172 22:06:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:09:13.172 22:06:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:09:13.172 22:06:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:09:13.172 22:06:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:13.172 22:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.172 22:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.172 [2024-07-15 22:06:38.446188] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:13.172 22:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.172 22:06:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:09:13.172 22:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.172 22:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.172 22:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.172 22:06:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:09:13.172 "tick_rate": 2400000000, 00:09:13.172 "poll_groups": [ 00:09:13.172 { 00:09:13.172 "name": "nvmf_tgt_poll_group_000", 00:09:13.172 "admin_qpairs": 0, 00:09:13.172 "io_qpairs": 0, 00:09:13.172 "current_admin_qpairs": 0, 00:09:13.172 "current_io_qpairs": 0, 00:09:13.172 "pending_bdev_io": 0, 00:09:13.172 "completed_nvme_io": 0, 00:09:13.172 "transports": [ 00:09:13.172 { 00:09:13.172 "trtype": "TCP" 00:09:13.172 } 00:09:13.172 ] 00:09:13.172 }, 00:09:13.172 { 00:09:13.172 "name": "nvmf_tgt_poll_group_001", 00:09:13.172 "admin_qpairs": 0, 00:09:13.172 "io_qpairs": 0, 00:09:13.172 "current_admin_qpairs": 0, 00:09:13.172 "current_io_qpairs": 0, 00:09:13.172 "pending_bdev_io": 0, 00:09:13.172 "completed_nvme_io": 0, 00:09:13.172 "transports": [ 00:09:13.172 { 00:09:13.172 "trtype": "TCP" 00:09:13.172 } 00:09:13.172 ] 00:09:13.172 }, 00:09:13.172 { 00:09:13.172 "name": "nvmf_tgt_poll_group_002", 00:09:13.172 "admin_qpairs": 0, 00:09:13.172 "io_qpairs": 0, 00:09:13.172 "current_admin_qpairs": 0, 00:09:13.172 "current_io_qpairs": 0, 00:09:13.172 "pending_bdev_io": 0, 00:09:13.172 "completed_nvme_io": 0, 00:09:13.172 "transports": [ 00:09:13.172 { 00:09:13.172 "trtype": "TCP" 00:09:13.172 } 00:09:13.172 ] 00:09:13.172 }, 00:09:13.172 { 00:09:13.172 "name": "nvmf_tgt_poll_group_003", 00:09:13.172 "admin_qpairs": 0, 00:09:13.172 "io_qpairs": 0, 00:09:13.172 "current_admin_qpairs": 0, 00:09:13.172 "current_io_qpairs": 0, 00:09:13.172 "pending_bdev_io": 0, 00:09:13.172 "completed_nvme_io": 0, 00:09:13.172 "transports": [ 00:09:13.172 { 00:09:13.172 "trtype": "TCP" 00:09:13.172 } 00:09:13.172 ] 00:09:13.172 } 00:09:13.172 ] 00:09:13.172 }' 00:09:13.172 22:06:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:09:13.172 22:06:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:13.172 22:06:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:13.172 22:06:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:13.434 22:06:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:09:13.434 22:06:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:09:13.434 22:06:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
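Before touching subsystems, rpc.sh sanity-checks the target through nvmf_get_stats, counting and summing fields of the returned JSON with the small jcount and jsum helpers whose jq/wc/awk pipelines appear in the trace (their bodies below are a reconstruction; the script keeps the stats in a variable rather than re-querying). With -m 0xF there are four poll groups, each poll group only gains a "TCP" entry in its transports array after nvmf_create_transport, and every qpair counter starts at zero:

  stats=$(rpc_cmd nvmf_get_stats)                    # rpc_cmd: the harness wrapper around the target's RPC socket
  jcount() { jq "$1" <<< "$stats" | wc -l; }         # how many values the filter yields
  jsum()   { jq "$1" <<< "$stats" | awk '{s+=$1}END{print s}'; }   # sum of the values

  (( $(jcount '.poll_groups[].name') == 4 ))         # one poll group per reactor core
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192    # options exactly as passed in the trace
  stats=$(rpc_cmd nvmf_get_stats)
  (( $(jsum '.poll_groups[].admin_qpairs') == 0 ))   # nothing connected yet
  (( $(jsum '.poll_groups[].io_qpairs') == 0 ))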
00:09:13.434 22:06:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:13.434 22:06:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:13.434 22:06:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:09:13.434 22:06:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:09:13.434 22:06:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:09:13.434 22:06:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:09:13.434 22:06:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:13.434 22:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.434 22:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.434 Malloc1 00:09:13.434 22:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.434 22:06:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:13.434 22:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.434 22:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.434 22:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.434 22:06:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:13.434 22:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.434 22:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.434 22:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.434 22:06:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:09:13.434 22:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.434 22:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.434 22:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.434 22:06:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:13.434 22:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.434 22:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.434 [2024-07-15 22:06:38.629916] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:13.435 22:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.435 22:06:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:09:13.435 22:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:13.435 22:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:09:13.435 22:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:09:13.435 22:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:13.435 22:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:13.435 22:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:13.435 22:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:13.435 22:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:13.435 22:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:13.435 22:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:13.435 22:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:09:13.435 [2024-07-15 22:06:38.656728] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:09:13.435 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:13.435 could not add new controller: failed to write to nvme-fabrics device 00:09:13.435 22:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:13.435 22:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:13.435 22:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:13.435 22:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:13.435 22:06:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:13.435 22:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.435 22:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.435 22:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.435 22:06:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:15.346 22:06:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:09:15.346 22:06:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:15.346 22:06:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:15.346 22:06:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:15.346 22:06:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:17.260 22:06:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:17.260 22:06:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:17.260 22:06:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:17.260 22:06:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:17.260 22:06:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:17.260 22:06:42 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:17.260 22:06:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:17.260 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.260 22:06:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:17.260 22:06:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:17.260 22:06:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:17.260 22:06:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:17.260 22:06:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:17.260 22:06:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:17.260 22:06:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:17.260 22:06:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:17.260 22:06:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.260 22:06:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.260 22:06:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.260 22:06:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:17.260 22:06:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:17.260 22:06:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:17.260 22:06:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:09:17.260 22:06:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:17.260 22:06:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:17.260 22:06:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:17.260 22:06:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:17.260 22:06:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:17.260 22:06:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:17.260 22:06:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:17.260 22:06:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:17.260 [2024-07-15 22:06:42.343438] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:09:17.260 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:17.260 could not add new controller: failed to write to nvme-fabrics device 00:09:17.260 22:06:42 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:09:17.260 22:06:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:17.260 22:06:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:17.260 22:06:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:17.260 22:06:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:09:17.260 22:06:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.260 22:06:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.260 22:06:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.260 22:06:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:18.646 22:06:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:18.646 22:06:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:18.646 22:06:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:18.646 22:06:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:18.646 22:06:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:21.190 22:06:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:21.190 22:06:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:21.190 22:06:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:21.190 22:06:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:21.190 22:06:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:21.190 22:06:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:21.190 22:06:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:21.190 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.190 22:06:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:21.190 22:06:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:21.190 22:06:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:21.190 22:06:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:21.190 22:06:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:21.190 22:06:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:21.190 22:06:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:21.190 22:06:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:21.190 22:06:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.190 22:06:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.190 22:06:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.190 22:06:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:09:21.190 22:06:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:21.190 22:06:46 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:21.190 22:06:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.190 22:06:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.190 22:06:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.190 22:06:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:21.190 22:06:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.190 22:06:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.190 [2024-07-15 22:06:46.106311] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:21.190 22:06:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.190 22:06:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:21.190 22:06:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.190 22:06:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.190 22:06:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.190 22:06:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:21.190 22:06:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.190 22:06:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.190 22:06:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.190 22:06:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:22.576 22:06:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:22.576 22:06:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:22.576 22:06:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:22.576 22:06:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:22.576 22:06:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:24.560 22:06:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:24.560 22:06:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:24.560 22:06:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:24.560 22:06:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:24.560 22:06:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:24.560 22:06:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:24.560 22:06:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:24.560 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.560 22:06:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:24.560 22:06:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:24.560 22:06:49 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:24.560 22:06:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:24.560 22:06:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:24.560 22:06:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:24.560 22:06:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:24.560 22:06:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:24.560 22:06:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.560 22:06:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.560 22:06:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.560 22:06:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:24.560 22:06:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.560 22:06:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.560 22:06:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.560 22:06:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:24.560 22:06:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:24.560 22:06:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.560 22:06:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.560 22:06:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.560 22:06:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:24.560 22:06:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.560 22:06:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.560 [2024-07-15 22:06:49.837240] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:24.560 22:06:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.560 22:06:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:24.560 22:06:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.560 22:06:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.560 22:06:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.560 22:06:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:24.560 22:06:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.560 22:06:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.560 22:06:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.560 22:06:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:26.475 22:06:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:26.475 22:06:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:09:26.475 22:06:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:26.475 22:06:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:26.475 22:06:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:28.389 22:06:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:28.389 22:06:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:28.389 22:06:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:28.389 22:06:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:28.389 22:06:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:28.389 22:06:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:28.389 22:06:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:28.389 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.389 22:06:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:28.389 22:06:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:28.389 22:06:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:28.389 22:06:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:28.389 22:06:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:28.389 22:06:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:28.389 22:06:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:28.389 22:06:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:28.389 22:06:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.389 22:06:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.389 22:06:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.389 22:06:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:28.389 22:06:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.389 22:06:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.389 22:06:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.389 22:06:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:28.389 22:06:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:28.389 22:06:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.389 22:06:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.389 22:06:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.389 22:06:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:28.389 22:06:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.389 22:06:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.389 [2024-07-15 22:06:53.552195] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:28.389 22:06:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.389 22:06:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:28.389 22:06:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.389 22:06:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.389 22:06:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.389 22:06:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:28.389 22:06:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.389 22:06:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.389 22:06:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.389 22:06:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:29.769 22:06:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:29.770 22:06:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:29.770 22:06:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:29.770 22:06:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:29.770 22:06:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:32.311 22:06:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:32.311 22:06:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:32.311 22:06:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:32.311 22:06:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:32.311 22:06:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:32.311 22:06:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:32.311 22:06:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:32.311 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.311 22:06:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:32.311 22:06:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:32.311 22:06:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:32.311 22:06:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:32.311 22:06:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:32.311 22:06:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:32.311 22:06:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:32.311 22:06:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:32.311 22:06:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.311 22:06:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.311 22:06:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:09:32.311 22:06:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:32.311 22:06:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.311 22:06:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.311 22:06:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.311 22:06:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:32.311 22:06:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:32.311 22:06:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.311 22:06:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.311 22:06:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.311 22:06:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:32.311 22:06:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.311 22:06:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.311 [2024-07-15 22:06:57.263463] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:32.311 22:06:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.311 22:06:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:32.311 22:06:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.311 22:06:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.311 22:06:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.311 22:06:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:32.311 22:06:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.311 22:06:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.311 22:06:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.311 22:06:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:33.721 22:06:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:33.721 22:06:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:33.721 22:06:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:33.721 22:06:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:33.721 22:06:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:35.632 22:07:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:35.633 22:07:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:35.633 22:07:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:35.633 22:07:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:35.633 22:07:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:35.633 
22:07:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:35.633 22:07:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:35.894 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.894 22:07:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:35.894 22:07:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:35.894 22:07:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:35.894 22:07:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:35.894 22:07:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:35.894 22:07:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:35.894 22:07:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:35.894 22:07:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:35.894 22:07:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.894 22:07:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.894 22:07:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.894 22:07:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:35.894 22:07:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.894 22:07:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.894 22:07:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.894 22:07:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:35.894 22:07:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:35.894 22:07:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.894 22:07:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.894 22:07:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.894 22:07:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:35.894 22:07:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.894 22:07:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.894 [2024-07-15 22:07:01.065787] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:35.894 22:07:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.894 22:07:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:35.894 22:07:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.894 22:07:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.894 22:07:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.894 22:07:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:35.894 22:07:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.894 22:07:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.894 22:07:01 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.894 22:07:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:37.277 22:07:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:37.277 22:07:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:37.277 22:07:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:37.277 22:07:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:37.278 22:07:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:39.821 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.821 [2024-07-15 22:07:04.783035] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.821 [2024-07-15 22:07:04.839155] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.821 [2024-07-15 22:07:04.899345] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
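[editor's note] As an aside, the waitforserial / waitforserial_disconnect checks traced repeatedly in the loop above boil down to polling lsblk for the subsystem serial number. A minimal sketch of that logic, with the command pipeline and the 15-iteration, 2-second retry budget taken from the trace (the real helpers in common/autotest_common.sh carry additional bookkeeping):

    # Wait until a block device with the given serial appears (sketch of waitforserial).
    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=1 nvme_devices=0
        while ((i++ <= 15)); do
            sleep 2
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            ((nvme_devices == nvme_device_counter)) && return 0
        done
        return 1
    }

    # Wait until no block device carries the serial any more (sketch of waitforserial_disconnect).
    waitforserial_disconnect() {
        local serial=$1 i=0
        while ((i++ <= 15)); do
            lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
            sleep 2
        done
        return 1
    }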
00:09:39.821 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.822 22:07:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:39.822 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.822 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.822 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.822 22:07:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:39.822 22:07:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:39.822 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.822 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.822 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.822 22:07:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:39.822 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.822 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.822 [2024-07-15 22:07:04.959545] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:39.822 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.822 22:07:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:39.822 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.822 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.822 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.822 22:07:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:39.822 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.822 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.822 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.822 22:07:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.822 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.822 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.822 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.822 22:07:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:39.822 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.822 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.822 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.822 22:07:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:39.822 22:07:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:39.822 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.822 22:07:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
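[editor's note] Condensed, the block being traced here (target/rpc.sh@99-@107) is a plain create/teardown cycle repeated with no host attached; roughly the following, where rpc_cmd is the test wrapper around scripts/rpc.py, Malloc1 is the bdev created earlier in the test, and loops=5 matches the seq 1 5 in the trace:

    loops=5
    for i in $(seq 1 $loops); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done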
00:09:39.822 22:07:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.822 22:07:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:39.822 22:07:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.822 22:07:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.822 [2024-07-15 22:07:05.015723] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:39.822 22:07:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.822 22:07:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:39.822 22:07:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.822 22:07:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.822 22:07:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.822 22:07:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:39.822 22:07:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.822 22:07:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.822 22:07:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.822 22:07:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.822 22:07:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.822 22:07:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.822 22:07:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.822 22:07:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:39.822 22:07:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.822 22:07:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.822 22:07:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.822 22:07:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:09:39.822 22:07:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.822 22:07:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.822 22:07:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.822 22:07:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:09:39.822 "tick_rate": 2400000000, 00:09:39.822 "poll_groups": [ 00:09:39.822 { 00:09:39.822 "name": "nvmf_tgt_poll_group_000", 00:09:39.822 "admin_qpairs": 0, 00:09:39.822 "io_qpairs": 224, 00:09:39.822 "current_admin_qpairs": 0, 00:09:39.822 "current_io_qpairs": 0, 00:09:39.822 "pending_bdev_io": 0, 00:09:39.822 "completed_nvme_io": 229, 00:09:39.822 "transports": [ 00:09:39.822 { 00:09:39.822 "trtype": "TCP" 00:09:39.822 } 00:09:39.822 ] 00:09:39.822 }, 00:09:39.822 { 00:09:39.822 "name": "nvmf_tgt_poll_group_001", 00:09:39.822 "admin_qpairs": 1, 00:09:39.822 "io_qpairs": 223, 00:09:39.822 "current_admin_qpairs": 0, 00:09:39.822 "current_io_qpairs": 0, 00:09:39.822 "pending_bdev_io": 0, 00:09:39.822 "completed_nvme_io": 272, 00:09:39.822 "transports": [ 00:09:39.822 { 00:09:39.822 "trtype": "TCP" 00:09:39.822 } 00:09:39.822 ] 00:09:39.822 }, 00:09:39.822 { 
00:09:39.822 "name": "nvmf_tgt_poll_group_002", 00:09:39.822 "admin_qpairs": 6, 00:09:39.822 "io_qpairs": 218, 00:09:39.822 "current_admin_qpairs": 0, 00:09:39.822 "current_io_qpairs": 0, 00:09:39.822 "pending_bdev_io": 0, 00:09:39.822 "completed_nvme_io": 390, 00:09:39.822 "transports": [ 00:09:39.822 { 00:09:39.822 "trtype": "TCP" 00:09:39.822 } 00:09:39.822 ] 00:09:39.822 }, 00:09:39.822 { 00:09:39.822 "name": "nvmf_tgt_poll_group_003", 00:09:39.822 "admin_qpairs": 0, 00:09:39.822 "io_qpairs": 224, 00:09:39.822 "current_admin_qpairs": 0, 00:09:39.822 "current_io_qpairs": 0, 00:09:39.822 "pending_bdev_io": 0, 00:09:39.822 "completed_nvme_io": 348, 00:09:39.822 "transports": [ 00:09:39.822 { 00:09:39.822 "trtype": "TCP" 00:09:39.822 } 00:09:39.822 ] 00:09:39.822 } 00:09:39.822 ] 00:09:39.822 }' 00:09:39.822 22:07:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:39.822 22:07:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:39.822 22:07:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:39.822 22:07:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:39.822 22:07:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:39.822 22:07:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:39.822 22:07:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:39.822 22:07:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:39.822 22:07:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:40.084 22:07:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:09:40.084 22:07:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:09:40.084 22:07:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:40.084 22:07:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:09:40.084 22:07:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:40.084 22:07:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:09:40.084 22:07:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:40.084 22:07:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:09:40.084 22:07:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:40.084 22:07:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:40.084 rmmod nvme_tcp 00:09:40.084 rmmod nvme_fabrics 00:09:40.084 rmmod nvme_keyring 00:09:40.084 22:07:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:40.084 22:07:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:09:40.084 22:07:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:09:40.084 22:07:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2620248 ']' 00:09:40.084 22:07:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2620248 00:09:40.084 22:07:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 2620248 ']' 00:09:40.084 22:07:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 2620248 00:09:40.084 22:07:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:09:40.084 22:07:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:40.084 22:07:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2620248 00:09:40.084 22:07:05 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:40.084 22:07:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:40.084 22:07:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2620248' 00:09:40.084 killing process with pid 2620248 00:09:40.084 22:07:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 2620248 00:09:40.084 22:07:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 2620248 00:09:40.345 22:07:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:40.345 22:07:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:40.345 22:07:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:40.345 22:07:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:40.345 22:07:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:40.345 22:07:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.345 22:07:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:40.345 22:07:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:42.258 22:07:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:42.258 00:09:42.258 real 0m37.444s 00:09:42.258 user 1m53.195s 00:09:42.258 sys 0m7.177s 00:09:42.258 22:07:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:42.258 22:07:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.258 ************************************ 00:09:42.258 END TEST nvmf_rpc 00:09:42.258 ************************************ 00:09:42.258 22:07:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:42.258 22:07:07 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:42.258 22:07:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:42.258 22:07:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:42.258 22:07:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:42.520 ************************************ 00:09:42.520 START TEST nvmf_invalid 00:09:42.520 ************************************ 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:42.520 * Looking for test storage... 
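[editor's note] Looking back at the nvmf_rpc wrap-up just above: the (( 7 > 0 )) and (( 889 > 0 )) assertions come from the jsum helper, which sums one field across the poll groups reported by nvmf_get_stats. A small sketch of that aggregation, reusing the jq/awk pipeline shown in the trace (exact helper wiring may differ):

    # Sum a numeric field across all poll groups in the nvmf_get_stats output (sketch of jsum).
    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

    stats=$(rpc_cmd nvmf_get_stats)
    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 0+1+6+0 = 7 in this run
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 224+223+218+224 = 889 in this run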
00:09:42.520 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:09:42.520 22:07:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:49.146 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:49.146 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:09:49.146 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:49.146 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:49.146 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:49.146 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:49.146 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:49.146 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:09:49.146 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:49.146 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:09:49.146 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:09:49.146 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:09:49.146 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:09:49.146 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:09:49.146 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:09:49.146 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:49.146 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:49.146 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:49.146 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:49.146 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:49.146 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:49.146 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:49.146 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:49.146 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:49.146 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:49.146 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:49.146 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:09:49.146 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:49.146 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:49.146 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:49.146 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:49.146 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:49.146 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:49.146 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:49.146 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:49.146 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:49.146 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:49.146 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.146 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.147 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:49.147 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:49.147 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:49.147 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:49.147 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:49.147 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:49.147 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.147 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.147 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:49.147 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:49.147 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:49.147 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:49.147 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:49.147 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.147 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:49.147 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.147 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:49.147 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:49.147 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.147 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:49.147 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:49.147 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.147 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:49.147 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.147 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:49.147 22:07:14 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.147 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:49.407 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:49.407 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.407 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:49.407 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:49.407 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.407 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:49.407 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:09:49.407 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:49.407 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:49.407 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:49.407 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:49.408 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:49.408 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:49.408 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:49.408 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:49.408 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:49.408 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:49.408 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:49.408 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:49.408 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:49.408 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:49.408 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:49.408 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:49.408 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:49.408 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:49.408 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:49.408 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:49.668 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:49.668 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:49.668 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:49.668 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:49.668 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.529 ms 00:09:49.668 00:09:49.668 --- 10.0.0.2 ping statistics --- 00:09:49.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.668 rtt min/avg/max/mdev = 0.529/0.529/0.529/0.000 ms 00:09:49.668 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:49.668 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:49.668 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.338 ms 00:09:49.668 00:09:49.668 --- 10.0.0.1 ping statistics --- 00:09:49.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.668 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:09:49.668 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:49.668 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:09:49.668 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:49.668 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:49.668 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:49.668 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:49.668 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:49.668 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:49.668 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:49.668 22:07:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:49.668 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:49.668 22:07:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:49.668 22:07:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:49.668 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2629963 00:09:49.668 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2629963 00:09:49.668 22:07:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:49.668 22:07:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 2629963 ']' 00:09:49.668 22:07:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.668 22:07:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:49.668 22:07:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.669 22:07:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:49.669 22:07:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:49.669 [2024-07-15 22:07:14.870068] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
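[editor's note] The nvmftestinit/nvmfappstart sequence traced above can be condensed as follows. Interface names (cvl_0_0/cvl_0_1), addresses, and the nvmf_tgt arguments are taken from the trace; paths are abbreviated and the wait-for-RPC loop is only a sketch (the real waitforlisten helper does more thorough checking):

    # Move the target-side interface into its own network namespace and address both ends.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Start the target inside the namespace and wait until its RPC socket answers.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.5
    done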
00:09:49.669 [2024-07-15 22:07:14.870119] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.669 EAL: No free 2048 kB hugepages reported on node 1 00:09:49.669 [2024-07-15 22:07:14.942065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:49.930 [2024-07-15 22:07:15.011086] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:49.931 [2024-07-15 22:07:15.011129] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:49.931 [2024-07-15 22:07:15.011136] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:49.931 [2024-07-15 22:07:15.011143] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:49.931 [2024-07-15 22:07:15.011148] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:49.931 [2024-07-15 22:07:15.011193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.931 [2024-07-15 22:07:15.011321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:49.931 [2024-07-15 22:07:15.011483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.931 [2024-07-15 22:07:15.011484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:50.501 22:07:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:50.501 22:07:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:09:50.501 22:07:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:50.501 22:07:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:50.501 22:07:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:50.501 22:07:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:50.501 22:07:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:50.501 22:07:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode25688 00:09:50.762 [2024-07-15 22:07:15.829097] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:50.762 22:07:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:09:50.762 { 00:09:50.762 "nqn": "nqn.2016-06.io.spdk:cnode25688", 00:09:50.762 "tgt_name": "foobar", 00:09:50.762 "method": "nvmf_create_subsystem", 00:09:50.762 "req_id": 1 00:09:50.762 } 00:09:50.762 Got JSON-RPC error response 00:09:50.762 response: 00:09:50.762 { 00:09:50.762 "code": -32603, 00:09:50.762 "message": "Unable to find target foobar" 00:09:50.762 }' 00:09:50.762 22:07:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:09:50.762 { 00:09:50.762 "nqn": "nqn.2016-06.io.spdk:cnode25688", 00:09:50.762 "tgt_name": "foobar", 00:09:50.762 "method": "nvmf_create_subsystem", 00:09:50.762 "req_id": 1 00:09:50.762 } 00:09:50.762 Got JSON-RPC error response 00:09:50.762 response: 00:09:50.762 { 00:09:50.762 "code": -32603, 00:09:50.762 "message": "Unable to find target foobar" 
00:09:50.762 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:50.762 22:07:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:50.762 22:07:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode13907 00:09:50.762 [2024-07-15 22:07:16.005652] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13907: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:50.762 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:09:50.762 { 00:09:50.762 "nqn": "nqn.2016-06.io.spdk:cnode13907", 00:09:50.762 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:50.762 "method": "nvmf_create_subsystem", 00:09:50.762 "req_id": 1 00:09:50.762 } 00:09:50.762 Got JSON-RPC error response 00:09:50.762 response: 00:09:50.762 { 00:09:50.762 "code": -32602, 00:09:50.762 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:50.762 }' 00:09:50.762 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:09:50.762 { 00:09:50.762 "nqn": "nqn.2016-06.io.spdk:cnode13907", 00:09:50.762 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:50.762 "method": "nvmf_create_subsystem", 00:09:50.762 "req_id": 1 00:09:50.762 } 00:09:50.762 Got JSON-RPC error response 00:09:50.762 response: 00:09:50.762 { 00:09:50.762 "code": -32602, 00:09:50.762 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:50.762 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:50.762 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:50.762 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode28327 00:09:51.023 [2024-07-15 22:07:16.178248] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28327: invalid model number 'SPDK_Controller' 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:09:51.023 { 00:09:51.023 "nqn": "nqn.2016-06.io.spdk:cnode28327", 00:09:51.023 "model_number": "SPDK_Controller\u001f", 00:09:51.023 "method": "nvmf_create_subsystem", 00:09:51.023 "req_id": 1 00:09:51.023 } 00:09:51.023 Got JSON-RPC error response 00:09:51.023 response: 00:09:51.023 { 00:09:51.023 "code": -32602, 00:09:51.023 "message": "Invalid MN SPDK_Controller\u001f" 00:09:51.023 }' 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:09:51.023 { 00:09:51.023 "nqn": "nqn.2016-06.io.spdk:cnode28327", 00:09:51.023 "model_number": "SPDK_Controller\u001f", 00:09:51.023 "method": "nvmf_create_subsystem", 00:09:51.023 "req_id": 1 00:09:51.023 } 00:09:51.023 Got JSON-RPC error response 00:09:51.023 response: 00:09:51.023 { 00:09:51.023 "code": -32602, 00:09:51.023 "message": "Invalid MN SPDK_Controller\u001f" 00:09:51.023 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' 
'83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.023 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:09:51.024 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:09:51.024 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:09:51.024 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.024 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.024 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:09:51.024 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:09:51.024 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:09:51.024 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.024 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:09:51.024 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:09:51.024 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:09:51.024 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:09:51.024 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.024 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.024 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:09:51.024 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:09:51.024 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:09:51.024 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.024 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.024 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:09:51.284 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:09:51.284 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:09:51.284 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.284 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.284 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:09:51.284 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:09:51.284 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:09:51.284 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.284 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.284 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:09:51.284 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:09:51.284 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:09:51.284 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.284 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.284 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:09:51.284 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:09:51.284 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:09:51.284 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.284 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.284 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ [ == \- ]] 00:09:51.284 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '[z5W]K~Y#S{B1zE[N@I)(' 00:09:51.284 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '[z5W]K~Y#S{B1zE[N@I)(' nqn.2016-06.io.spdk:cnode12900 00:09:51.284 [2024-07-15 22:07:16.515280] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12900: invalid serial number '[z5W]K~Y#S{B1zE[N@I)(' 00:09:51.285 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:09:51.285 { 00:09:51.285 "nqn": "nqn.2016-06.io.spdk:cnode12900", 00:09:51.285 "serial_number": "[z5W]K~Y#S{B1zE[N@I)(", 00:09:51.285 "method": "nvmf_create_subsystem", 00:09:51.285 "req_id": 1 00:09:51.285 } 00:09:51.285 Got JSON-RPC error response 00:09:51.285 response: 
00:09:51.285 { 00:09:51.285 "code": -32602, 00:09:51.285 "message": "Invalid SN [z5W]K~Y#S{B1zE[N@I)(" 00:09:51.285 }' 00:09:51.285 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:09:51.285 { 00:09:51.285 "nqn": "nqn.2016-06.io.spdk:cnode12900", 00:09:51.285 "serial_number": "[z5W]K~Y#S{B1zE[N@I)(", 00:09:51.285 "method": "nvmf_create_subsystem", 00:09:51.285 "req_id": 1 00:09:51.285 } 00:09:51.285 Got JSON-RPC error response 00:09:51.285 response: 00:09:51.285 { 00:09:51.285 "code": -32602, 00:09:51.285 "message": "Invalid SN [z5W]K~Y#S{B1zE[N@I)(" 00:09:51.285 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:51.285 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:09:51.285 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:09:51.285 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:51.285 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:51.285 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:51.285 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:51.285 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.285 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:09:51.285 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:09:51.285 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:09:51.285 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.285 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.285 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:09:51.285 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:09:51.285 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:09:51.285 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.285 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.285 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:09:51.285 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:09:51.285 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:09:51.285 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.285 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.285 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:09:51.285 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:09:51.285 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:09:51.285 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.285 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.285 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 85 00:09:51.285 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:09:51.285 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:09:51.285 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.285 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.285 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:09:51.285 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:09:51.285 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:09:51.285 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.285 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.285 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:09:51.285 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:09:51.285 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:09:51.285 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.285 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 
00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:09:51.546 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # 
string+='>' 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll 
< length )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ n == \- ]] 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'nc0JUuKR>09^FvyF>{f>c` e%`%Bm`|}kbzSEyR(' 00:09:51.547 22:07:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'nc0JUuKR>09^FvyF>{f>c` e%`%Bm`|}kbzSEyR(' nqn.2016-06.io.spdk:cnode17979 00:09:51.808 [2024-07-15 22:07:17.004826] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17979: invalid model number 'nc0JUuKR>09^FvyF>{f>c` e%`%Bm`|}kbzSEyR(' 00:09:51.808 22:07:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:09:51.808 { 00:09:51.808 "nqn": "nqn.2016-06.io.spdk:cnode17979", 00:09:51.808 "model_number": "nc0JUuKR>09^Fvy\u007fF>{f>c` e%`%Bm`|}kbzSEyR(", 00:09:51.808 "method": "nvmf_create_subsystem", 00:09:51.808 "req_id": 1 00:09:51.808 } 00:09:51.808 Got JSON-RPC error response 00:09:51.808 response: 00:09:51.808 { 00:09:51.808 "code": -32602, 00:09:51.808 "message": "Invalid MN nc0JUuKR>09^Fvy\u007fF>{f>c` e%`%Bm`|}kbzSEyR(" 00:09:51.808 }' 00:09:51.808 22:07:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:09:51.808 { 00:09:51.808 "nqn": "nqn.2016-06.io.spdk:cnode17979", 00:09:51.808 "model_number": "nc0JUuKR>09^Fvy\u007fF>{f>c` e%`%Bm`|}kbzSEyR(", 00:09:51.808 "method": "nvmf_create_subsystem", 00:09:51.808 "req_id": 1 00:09:51.808 } 00:09:51.808 Got JSON-RPC error response 00:09:51.808 response: 00:09:51.808 { 00:09:51.808 "code": -32602, 00:09:51.808 "message": "Invalid MN nc0JUuKR>09^Fvy\u007fF>{f>c` e%`%Bm`|}kbzSEyR(" 00:09:51.808 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:51.808 22:07:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:09:52.069 [2024-07-15 22:07:17.177460] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:52.069 22:07:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:09:52.069 22:07:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:09:52.069 22:07:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:09:52.069 22:07:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:09:52.069 22:07:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:09:52.069 22:07:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:09:52.329 [2024-07-15 22:07:17.530586] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:09:52.329 22:07:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:09:52.329 { 00:09:52.329 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:52.329 "listen_address": { 00:09:52.329 "trtype": "tcp", 00:09:52.329 "traddr": "", 00:09:52.329 "trsvcid": "4421" 00:09:52.329 }, 00:09:52.329 "method": "nvmf_subsystem_remove_listener", 00:09:52.329 "req_id": 1 00:09:52.329 } 00:09:52.329 Got JSON-RPC error response 00:09:52.329 response: 00:09:52.329 { 00:09:52.329 "code": -32602, 00:09:52.329 "message": "Invalid parameters" 00:09:52.329 }' 00:09:52.329 22:07:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:09:52.329 { 00:09:52.329 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:52.329 "listen_address": { 00:09:52.329 "trtype": "tcp", 00:09:52.329 "traddr": "", 00:09:52.329 "trsvcid": "4421" 00:09:52.329 }, 00:09:52.329 "method": "nvmf_subsystem_remove_listener", 00:09:52.329 "req_id": 1 00:09:52.329 } 00:09:52.329 Got JSON-RPC error response 00:09:52.329 response: 00:09:52.329 { 00:09:52.329 "code": -32602, 00:09:52.329 "message": "Invalid parameters" 00:09:52.329 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:09:52.329 22:07:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18648 -i 0 00:09:52.590 [2024-07-15 22:07:17.699085] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18648: invalid cntlid range [0-65519] 00:09:52.590 22:07:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:09:52.590 { 00:09:52.590 "nqn": "nqn.2016-06.io.spdk:cnode18648", 00:09:52.590 "min_cntlid": 0, 00:09:52.590 "method": "nvmf_create_subsystem", 00:09:52.590 "req_id": 1 00:09:52.590 } 00:09:52.590 Got JSON-RPC error response 00:09:52.590 response: 00:09:52.590 { 00:09:52.590 "code": -32602, 00:09:52.590 "message": "Invalid cntlid range [0-65519]" 00:09:52.590 }' 00:09:52.590 22:07:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:09:52.590 { 00:09:52.590 "nqn": "nqn.2016-06.io.spdk:cnode18648", 00:09:52.590 "min_cntlid": 0, 00:09:52.590 "method": "nvmf_create_subsystem", 00:09:52.590 "req_id": 1 00:09:52.590 } 00:09:52.590 Got JSON-RPC error response 00:09:52.590 response: 00:09:52.590 { 00:09:52.590 "code": -32602, 00:09:52.590 "message": "Invalid cntlid range [0-65519]" 00:09:52.590 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ 
\r\a\n\g\e* ]] 00:09:52.590 22:07:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25821 -i 65520 00:09:52.590 [2024-07-15 22:07:17.863602] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25821: invalid cntlid range [65520-65519] 00:09:52.590 22:07:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:09:52.590 { 00:09:52.590 "nqn": "nqn.2016-06.io.spdk:cnode25821", 00:09:52.590 "min_cntlid": 65520, 00:09:52.590 "method": "nvmf_create_subsystem", 00:09:52.590 "req_id": 1 00:09:52.590 } 00:09:52.590 Got JSON-RPC error response 00:09:52.590 response: 00:09:52.590 { 00:09:52.590 "code": -32602, 00:09:52.590 "message": "Invalid cntlid range [65520-65519]" 00:09:52.590 }' 00:09:52.590 22:07:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:09:52.590 { 00:09:52.590 "nqn": "nqn.2016-06.io.spdk:cnode25821", 00:09:52.590 "min_cntlid": 65520, 00:09:52.590 "method": "nvmf_create_subsystem", 00:09:52.590 "req_id": 1 00:09:52.590 } 00:09:52.590 Got JSON-RPC error response 00:09:52.590 response: 00:09:52.590 { 00:09:52.590 "code": -32602, 00:09:52.590 "message": "Invalid cntlid range [65520-65519]" 00:09:52.590 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:52.590 22:07:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3822 -I 0 00:09:52.851 [2024-07-15 22:07:18.036189] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3822: invalid cntlid range [1-0] 00:09:52.851 22:07:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:09:52.851 { 00:09:52.851 "nqn": "nqn.2016-06.io.spdk:cnode3822", 00:09:52.851 "max_cntlid": 0, 00:09:52.851 "method": "nvmf_create_subsystem", 00:09:52.851 "req_id": 1 00:09:52.851 } 00:09:52.851 Got JSON-RPC error response 00:09:52.851 response: 00:09:52.851 { 00:09:52.851 "code": -32602, 00:09:52.851 "message": "Invalid cntlid range [1-0]" 00:09:52.851 }' 00:09:52.851 22:07:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:09:52.851 { 00:09:52.851 "nqn": "nqn.2016-06.io.spdk:cnode3822", 00:09:52.851 "max_cntlid": 0, 00:09:52.851 "method": "nvmf_create_subsystem", 00:09:52.851 "req_id": 1 00:09:52.851 } 00:09:52.851 Got JSON-RPC error response 00:09:52.851 response: 00:09:52.851 { 00:09:52.851 "code": -32602, 00:09:52.851 "message": "Invalid cntlid range [1-0]" 00:09:52.851 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:52.851 22:07:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6978 -I 65520 00:09:53.113 [2024-07-15 22:07:18.208730] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6978: invalid cntlid range [1-65520] 00:09:53.113 22:07:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:09:53.113 { 00:09:53.113 "nqn": "nqn.2016-06.io.spdk:cnode6978", 00:09:53.113 "max_cntlid": 65520, 00:09:53.113 "method": "nvmf_create_subsystem", 00:09:53.113 "req_id": 1 00:09:53.113 } 00:09:53.113 Got JSON-RPC error response 00:09:53.113 response: 00:09:53.113 { 00:09:53.113 "code": -32602, 00:09:53.113 "message": "Invalid cntlid range [1-65520]" 00:09:53.113 }' 00:09:53.113 22:07:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- 
# [[ request: 00:09:53.113 { 00:09:53.113 "nqn": "nqn.2016-06.io.spdk:cnode6978", 00:09:53.113 "max_cntlid": 65520, 00:09:53.113 "method": "nvmf_create_subsystem", 00:09:53.113 "req_id": 1 00:09:53.113 } 00:09:53.113 Got JSON-RPC error response 00:09:53.113 response: 00:09:53.113 { 00:09:53.113 "code": -32602, 00:09:53.113 "message": "Invalid cntlid range [1-65520]" 00:09:53.113 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:53.113 22:07:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16071 -i 6 -I 5 00:09:53.113 [2024-07-15 22:07:18.381263] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16071: invalid cntlid range [6-5] 00:09:53.113 22:07:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:09:53.113 { 00:09:53.113 "nqn": "nqn.2016-06.io.spdk:cnode16071", 00:09:53.113 "min_cntlid": 6, 00:09:53.113 "max_cntlid": 5, 00:09:53.113 "method": "nvmf_create_subsystem", 00:09:53.113 "req_id": 1 00:09:53.113 } 00:09:53.113 Got JSON-RPC error response 00:09:53.113 response: 00:09:53.113 { 00:09:53.113 "code": -32602, 00:09:53.113 "message": "Invalid cntlid range [6-5]" 00:09:53.113 }' 00:09:53.113 22:07:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:09:53.113 { 00:09:53.113 "nqn": "nqn.2016-06.io.spdk:cnode16071", 00:09:53.113 "min_cntlid": 6, 00:09:53.113 "max_cntlid": 5, 00:09:53.113 "method": "nvmf_create_subsystem", 00:09:53.113 "req_id": 1 00:09:53.113 } 00:09:53.113 Got JSON-RPC error response 00:09:53.113 response: 00:09:53.113 { 00:09:53.113 "code": -32602, 00:09:53.113 "message": "Invalid cntlid range [6-5]" 00:09:53.113 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:53.113 22:07:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:09:53.374 22:07:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:09:53.374 { 00:09:53.374 "name": "foobar", 00:09:53.374 "method": "nvmf_delete_target", 00:09:53.374 "req_id": 1 00:09:53.374 } 00:09:53.374 Got JSON-RPC error response 00:09:53.374 response: 00:09:53.374 { 00:09:53.374 "code": -32602, 00:09:53.374 "message": "The specified target doesn'\''t exist, cannot delete it." 00:09:53.374 }' 00:09:53.374 22:07:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:09:53.374 { 00:09:53.374 "name": "foobar", 00:09:53.374 "method": "nvmf_delete_target", 00:09:53.374 "req_id": 1 00:09:53.374 } 00:09:53.374 Got JSON-RPC error response 00:09:53.374 response: 00:09:53.374 { 00:09:53.374 "code": -32602, 00:09:53.374 "message": "The specified target doesn't exist, cannot delete it." 
00:09:53.374 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:09:53.374 22:07:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:09:53.374 22:07:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:09:53.374 22:07:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:53.374 22:07:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:09:53.374 22:07:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:53.374 22:07:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:09:53.374 22:07:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:53.374 22:07:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:53.374 rmmod nvme_tcp 00:09:53.374 rmmod nvme_fabrics 00:09:53.374 rmmod nvme_keyring 00:09:53.374 22:07:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:53.374 22:07:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:09:53.374 22:07:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:09:53.374 22:07:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 2629963 ']' 00:09:53.374 22:07:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 2629963 00:09:53.374 22:07:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 2629963 ']' 00:09:53.374 22:07:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 2629963 00:09:53.374 22:07:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:09:53.374 22:07:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:53.374 22:07:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2629963 00:09:53.374 22:07:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:53.374 22:07:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:53.374 22:07:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2629963' 00:09:53.374 killing process with pid 2629963 00:09:53.374 22:07:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 2629963 00:09:53.374 22:07:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 2629963 00:09:53.636 22:07:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:53.636 22:07:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:53.636 22:07:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:53.636 22:07:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:53.636 22:07:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:53.636 22:07:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.636 22:07:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:53.636 22:07:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.549 22:07:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:55.549 00:09:55.549 real 0m13.257s 00:09:55.549 user 0m19.171s 00:09:55.549 sys 0m6.207s 00:09:55.549 22:07:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:55.549 22:07:20 
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:55.549 ************************************ 00:09:55.549 END TEST nvmf_invalid 00:09:55.549 ************************************ 00:09:55.810 22:07:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:55.810 22:07:20 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:55.810 22:07:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:55.810 22:07:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:55.810 22:07:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:55.810 ************************************ 00:09:55.810 START TEST nvmf_abort 00:09:55.810 ************************************ 00:09:55.810 22:07:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:55.810 * Looking for test storage... 00:09:55.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:55.810 22:07:21 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:09:55.810 22:07:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:02.413 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:02.413 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:10:02.413 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:02.413 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:02.413 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:02.413 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:02.413 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:02.413 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:10:02.413 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:02.413 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:10:02.413 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:10:02.413 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:10:02.413 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:10:02.413 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:10:02.413 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:10:02.413 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:02.413 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:02.413 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:02.413 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:02.413 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:02.413 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:02.413 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:02.413 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:02.413 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:02.413 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:02.413 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:02.413 
22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:02.413 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:02.413 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:02.413 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:02.413 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:02.413 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:02.413 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:02.413 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:02.413 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:02.413 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:02.413 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:02.413 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:02.413 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:02.413 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:02.413 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:02.413 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:02.413 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:02.413 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:02.413 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:02.414 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:02.414 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:02.414 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:02.414 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:02.414 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:02.414 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:02.414 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:02.414 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:02.414 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:02.414 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:02.414 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:02.414 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:02.414 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:02.414 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:02.414 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:02.414 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:02.414 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:02.414 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:02.414 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:02.414 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:10:02.414 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:02.414 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:02.414 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:02.414 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:02.414 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:02.414 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:02.414 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:02.414 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:10:02.414 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:02.414 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:02.414 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:02.414 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:02.414 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:02.414 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:02.414 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:02.414 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:02.414 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:02.414 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:02.414 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:02.414 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:02.414 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:02.414 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:02.414 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:02.414 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:02.674 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:02.674 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:02.674 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:02.674 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:02.674 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:02.674 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:02.674 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:02.674 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:02.674 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.413 ms 00:10:02.675 00:10:02.675 --- 10.0.0.2 ping statistics --- 00:10:02.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.675 rtt min/avg/max/mdev = 0.413/0.413/0.413/0.000 ms 00:10:02.675 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:02.675 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:02.675 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.342 ms 00:10:02.675 00:10:02.675 --- 10.0.0.1 ping statistics --- 00:10:02.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.675 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:10:02.675 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:02.675 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:10:02.675 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:02.675 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:02.675 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:02.675 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:02.675 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:02.675 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:02.675 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:02.675 22:07:27 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:10:02.675 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:02.675 22:07:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:02.675 22:07:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:02.675 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2634956 00:10:02.936 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2634956 00:10:02.936 22:07:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:02.936 22:07:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 2634956 ']' 00:10:02.936 22:07:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.936 22:07:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:02.936 22:07:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.936 22:07:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:02.936 22:07:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:02.936 [2024-07-15 22:07:28.050178] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
00:10:02.936 [2024-07-15 22:07:28.050227] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:02.936 EAL: No free 2048 kB hugepages reported on node 1 00:10:02.936 [2024-07-15 22:07:28.131150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:02.936 [2024-07-15 22:07:28.206401] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:02.936 [2024-07-15 22:07:28.206454] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:02.936 [2024-07-15 22:07:28.206462] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:02.936 [2024-07-15 22:07:28.206468] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:02.936 [2024-07-15 22:07:28.206474] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:02.936 [2024-07-15 22:07:28.206598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:02.936 [2024-07-15 22:07:28.206733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:02.936 [2024-07-15 22:07:28.206734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:03.504 22:07:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:03.504 22:07:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:10:03.504 22:07:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:03.504 22:07:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:03.504 22:07:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:03.765 22:07:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:03.765 22:07:28 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:10:03.765 22:07:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.765 22:07:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:03.765 [2024-07-15 22:07:28.869892] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:03.765 22:07:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.765 22:07:28 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:10:03.765 22:07:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.765 22:07:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:03.765 Malloc0 00:10:03.765 22:07:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.765 22:07:28 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:03.765 22:07:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.765 22:07:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:03.765 Delay0 00:10:03.765 22:07:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.765 22:07:28 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
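The nvmf_tcp_init trace above moves one port of the NIC pair into a private network namespace, addresses both ends, opens TCP port 4420, and ping-checks the link before the target is launched inside that namespace. A minimal sketch of that bring-up, assuming the interface and namespace names seen in this run (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk):

#!/usr/bin/env bash
# Hedged sketch of the nvmf_tcp_init sequence traced above; interface names,
# addresses and the namespace name are taken from this run and will differ
# on other hosts.
set -euo pipefail

TARGET_IF=cvl_0_0        # NIC handed to the SPDK target
INITIATOR_IF=cvl_0_1     # NIC left in the host namespace for the initiator
NS=cvl_0_0_ns_spdk

# Start from clean addresses, then isolate the target-side port.
ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

# Address both ends of the link and bring everything up.
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP port on the initiator side and verify reachability both ways.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# The target then runs inside the namespace, as in the trace:
# ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
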
00:10:03.765 22:07:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.765 22:07:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:03.765 22:07:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.765 22:07:28 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:10:03.765 22:07:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.765 22:07:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:03.765 22:07:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.765 22:07:28 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:03.765 22:07:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.765 22:07:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:03.765 [2024-07-15 22:07:28.948472] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:03.765 22:07:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.765 22:07:28 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:03.765 22:07:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.765 22:07:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:03.765 22:07:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.765 22:07:28 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:10:03.765 EAL: No free 2048 kB hugepages reported on node 1 00:10:03.765 [2024-07-15 22:07:29.069949] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:06.316 Initializing NVMe Controllers 00:10:06.316 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:06.316 controller IO queue size 128 less than required 00:10:06.316 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:10:06.316 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:10:06.316 Initialization complete. Launching workers. 
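The abort run above is driven entirely over the RPC socket: a malloc bdev is wrapped in a delay bdev so that I/O stays queued long enough to be aborted, the subsystem is exported over TCP, and the bundled abort example is pointed at it. A hedged sketch of that sequence using scripts/rpc.py, assuming the default /var/tmp/spdk.sock the log waits for and paths relative to an SPDK checkout:

#!/usr/bin/env bash
# Sketch only; commands and arguments are copied from the rpc_cmd calls in the
# trace above, not a definitive reproduction of abort.sh.
set -euo pipefail
rpc=./scripts/rpc.py

# Transport options exactly as traced (-t tcp -o -u 8192 -a 256).
$rpc nvmf_create_transport -t tcp -o -u 8192 -a 256

# 64 MiB malloc bdev wrapped in a delay bdev; the 1000000 values are
# microseconds, i.e. roughly 1 s of added latency, so plenty of I/O is still
# in flight when the aborts arrive.
$rpc bdev_malloc_create 64 4096 -b Malloc0
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000

# Export the delay bdev over NVMe/TCP on the namespaced target address.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# The abort example then hammers the subsystem at queue depth 128 for 1 s:
./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128
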
00:10:06.316 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 30420 00:10:06.316 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 30481, failed to submit 62 00:10:06.316 success 30424, unsuccess 57, failed 0 00:10:06.316 22:07:31 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:06.316 22:07:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.316 22:07:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:06.316 22:07:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.316 22:07:31 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:10:06.316 22:07:31 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:10:06.316 22:07:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:06.316 22:07:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:10:06.316 22:07:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:06.316 22:07:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:10:06.316 22:07:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:06.316 22:07:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:06.316 rmmod nvme_tcp 00:10:06.316 rmmod nvme_fabrics 00:10:06.316 rmmod nvme_keyring 00:10:06.316 22:07:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:06.316 22:07:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:10:06.316 22:07:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:10:06.316 22:07:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2634956 ']' 00:10:06.316 22:07:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2634956 00:10:06.316 22:07:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 2634956 ']' 00:10:06.316 22:07:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 2634956 00:10:06.316 22:07:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:10:06.316 22:07:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:06.316 22:07:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2634956 00:10:06.316 22:07:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:06.316 22:07:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:06.316 22:07:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2634956' 00:10:06.316 killing process with pid 2634956 00:10:06.316 22:07:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 2634956 00:10:06.316 22:07:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 2634956 00:10:06.316 22:07:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:06.316 22:07:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:06.316 22:07:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:06.316 22:07:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:06.316 22:07:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:06.316 22:07:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.316 22:07:31 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:06.316 22:07:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.227 22:07:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:08.227 00:10:08.227 real 0m12.602s 00:10:08.227 user 0m13.366s 00:10:08.227 sys 0m6.087s 00:10:08.227 22:07:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:08.227 22:07:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:08.227 ************************************ 00:10:08.227 END TEST nvmf_abort 00:10:08.227 ************************************ 00:10:08.488 22:07:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:08.488 22:07:33 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:08.488 22:07:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:08.488 22:07:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:08.488 22:07:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:08.488 ************************************ 00:10:08.488 START TEST nvmf_ns_hotplug_stress 00:10:08.488 ************************************ 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:08.488 * Looking for test storage... 00:10:08.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:08.488 22:07:33 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:08.488 22:07:33 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:08.488 22:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:16.640 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:16.640 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:16.640 22:07:40 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:16.640 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:16.640 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:16.640 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:16.641 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:16.641 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:16.641 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:16.641 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:16.641 22:07:40 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:16.641 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:16.641 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:16.641 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:16.641 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:16.641 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:16.641 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:16.641 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:16.641 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:16.641 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:16.641 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:16.641 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:16.641 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:16.641 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:16.641 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:16.641 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:16.641 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.526 ms 00:10:16.641 00:10:16.641 --- 10.0.0.2 ping statistics --- 00:10:16.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.641 rtt min/avg/max/mdev = 0.526/0.526/0.526/0.000 ms 00:10:16.641 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:16.641 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:16.641 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.352 ms 00:10:16.641 00:10:16.641 --- 10.0.0.1 ping statistics --- 00:10:16.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.641 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:10:16.641 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:16.641 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:10:16.641 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:16.641 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:16.641 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:16.641 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:16.641 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:16.641 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:16.641 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:16.641 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:10:16.641 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:16.641 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:16.641 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:16.641 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2639926 00:10:16.641 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2639926 00:10:16.641 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:16.641 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 2639926 ']' 00:10:16.641 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.641 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:16.641 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.641 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:16.641 22:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:16.641 [2024-07-15 22:07:40.998005] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
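The records that follow show the hotplug stress proper: subsystem nqn.2016-06.io.spdk:cnode1 gets a Delay0 namespace plus a 1000-block NULL1 namespace, then spdk_nvme_perf reads from it for 30 seconds while a loop repeatedly detaches and re-attaches namespace 1 and grows NULL1 by one block per pass (the recurring nvmf_subsystem_remove_ns / nvmf_subsystem_add_ns / bdev_null_resize calls below). Condensed into a hedged shell sketch, with names and arguments taken from this run:

#!/usr/bin/env bash
# Sketch of the hotplug-stress loop traced below; the loop condition mirrors
# the repeated "kill -0 <perf pid>" checks in the trace and is not a verbatim
# copy of ns_hotplug_stress.sh.
set -euo pipefail
rpc=./scripts/rpc.py

# Background reader against the exported subsystem, as invoked in the trace.
./build/bin/spdk_nvme_perf -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
perf_pid=$!

null_size=1000
while kill -0 "$perf_pid" 2>/dev/null; do
    # Detach namespace 1 and immediately re-attach Delay0 under live I/O.
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # Grow the null bdev one block at a time to force namespace resize events.
    null_size=$((null_size + 1))
    $rpc bdev_null_resize NULL1 "$null_size"
done
wait "$perf_pid"
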
00:10:16.641 [2024-07-15 22:07:40.998072] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:16.641 EAL: No free 2048 kB hugepages reported on node 1 00:10:16.641 [2024-07-15 22:07:41.062336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:16.641 [2024-07-15 22:07:41.147870] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:16.641 [2024-07-15 22:07:41.147923] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:16.641 [2024-07-15 22:07:41.147929] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:16.641 [2024-07-15 22:07:41.147934] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:16.641 [2024-07-15 22:07:41.147939] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:16.641 [2024-07-15 22:07:41.148066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:16.641 [2024-07-15 22:07:41.148232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:16.641 [2024-07-15 22:07:41.148398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:16.641 22:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:16.641 22:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:10:16.641 22:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:16.641 22:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:16.641 22:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:16.641 22:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:16.641 22:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:10:16.641 22:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:16.948 [2024-07-15 22:07:42.013841] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:16.948 22:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:16.948 22:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:17.209 [2024-07-15 22:07:42.354978] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:17.209 22:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:17.469 22:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:10:17.469 Malloc0 00:10:17.469 22:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:17.729 Delay0 00:10:17.729 22:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.990 22:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:17.990 NULL1 00:10:17.990 22:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:18.251 22:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2640339 00:10:18.251 22:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:18.251 22:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2640339 00:10:18.251 22:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.251 EAL: No free 2048 kB hugepages reported on node 1 00:10:19.643 Read completed with error (sct=0, sc=11) 00:10:19.643 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.643 22:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.643 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.643 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.643 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.643 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.643 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.643 22:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:19.643 22:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:19.643 true 00:10:19.643 22:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2640339 00:10:19.643 22:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.590 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:20.590 22:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.590 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:20.850 22:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:20.850 22:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:20.850 true 00:10:20.850 22:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2640339 00:10:20.850 22:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.110 22:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.378 22:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:21.378 22:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:21.378 true 00:10:21.378 22:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2640339 00:10:21.378 22:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.639 22:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.639 22:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:21.639 22:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:21.899 true 00:10:21.899 22:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2640339 00:10:21.899 22:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.159 22:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.159 22:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:22.159 22:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:22.420 true 00:10:22.420 22:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2640339 00:10:22.420 22:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.681 22:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.681 22:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:22.681 22:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:22.942 true 00:10:22.942 22:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2640339 00:10:22.942 22:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.942 22:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.202 22:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:10:23.202 22:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:23.462 true 00:10:23.462 22:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2640339 00:10:23.462 22:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.462 22:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.723 22:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:23.723 22:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:23.983 true 00:10:23.983 22:07:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2640339 00:10:23.983 22:07:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.983 22:07:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.243 22:07:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:24.243 22:07:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:24.243 true 00:10:24.504 22:07:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2640339 00:10:24.504 22:07:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.504 22:07:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.765 22:07:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:24.765 22:07:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:24.765 true 00:10:24.766 22:07:50 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@44 -- # kill -0 2640339 00:10:24.766 22:07:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:26.156 22:07:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:26.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:26.156 22:07:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:26.156 22:07:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:26.156 true 00:10:26.156 22:07:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2640339 00:10:26.156 22:07:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.097 22:07:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.097 22:07:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:27.097 22:07:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:27.357 true 00:10:27.357 22:07:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2640339 00:10:27.357 22:07:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.618 22:07:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.618 22:07:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:27.618 22:07:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:27.877 true 00:10:27.877 22:07:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2640339 00:10:27.877 22:07:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.136 22:07:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.136 22:07:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:28.136 22:07:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1014 00:10:28.396 true 00:10:28.396 22:07:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2640339 00:10:28.396 22:07:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.656 22:07:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.656 22:07:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:28.656 22:07:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:28.917 true 00:10:28.917 22:07:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2640339 00:10:28.917 22:07:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.917 22:07:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.179 22:07:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:29.179 22:07:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:29.440 true 00:10:29.440 22:07:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2640339 00:10:29.440 22:07:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.381 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:30.381 22:07:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.381 22:07:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:30.381 22:07:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:30.642 true 00:10:30.643 22:07:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2640339 00:10:30.643 22:07:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.643 22:07:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.904 22:07:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:30.904 22:07:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:31.165 true 00:10:31.165 22:07:56 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2640339 00:10:31.165 22:07:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.165 22:07:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.425 22:07:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:31.425 22:07:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:31.686 true 00:10:31.686 22:07:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2640339 00:10:31.686 22:07:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.686 22:07:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.947 22:07:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:31.947 22:07:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:31.947 true 00:10:31.947 22:07:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2640339 00:10:31.947 22:07:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.208 22:07:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.469 22:07:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:32.469 22:07:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:32.469 true 00:10:32.469 22:07:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2640339 00:10:32.469 22:07:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.412 22:07:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.672 22:07:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:33.673 22:07:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:33.673 true 00:10:33.673 22:07:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2640339 00:10:33.673 22:07:58 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.933 22:07:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:34.203 22:07:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:34.203 22:07:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:34.203 true 00:10:34.203 22:07:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2640339 00:10:34.203 22:07:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.501 22:07:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:34.761 22:07:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:34.761 22:07:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:34.761 true 00:10:34.761 22:07:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2640339 00:10:34.761 22:07:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.021 22:08:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:35.021 22:08:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:35.021 22:08:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:35.282 true 00:10:35.282 22:08:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2640339 00:10:35.282 22:08:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.542 22:08:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:35.542 22:08:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:35.542 22:08:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:35.803 true 00:10:35.803 22:08:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2640339 00:10:35.803 22:08:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
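The xtrace entries above all come from the single-namespace phase of ns_hotplug_stress.sh: as long as the background perf process (PID 2640339 in this run) is still alive, namespace 1 of nqn.2016-06.io.spdk:cnode1 is removed, re-added backed by the Delay0 bdev, and the NULL1 null bdev is grown by one unit per pass. A minimal sketch of that loop, reconstructed from the @44-@50 line tags in the trace rather than quoted from the script itself (rpc_py stands in for the absolute scripts/rpc.py path shown in the trace; perf_pid and the starting null_size are likewise assumptions), looks like:

    null_size=1000                   # assumed starting value; the trace above is already past 1006
    while kill -0 "$perf_pid"; do    # @44: keep cycling until the perf job exits
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # @45
        "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # @46
        null_size=$((null_size + 1))                                       # @49: 1006, 1007, ... in this run
        "$rpc_py" bdev_null_resize NULL1 "$null_size"                      # @50
    done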
00:10:36.064 22:08:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:36.064 22:08:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:10:36.064 22:08:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:36.324 true 00:10:36.324 22:08:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2640339 00:10:36.324 22:08:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.585 22:08:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:36.585 22:08:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:10:36.585 22:08:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:36.846 true 00:10:36.846 22:08:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2640339 00:10:36.846 22:08:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.846 22:08:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:37.106 22:08:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:10:37.106 22:08:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:37.367 true 00:10:37.367 22:08:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2640339 00:10:37.367 22:08:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.367 22:08:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:37.628 22:08:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:10:37.628 22:08:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:10:37.888 true 00:10:37.888 22:08:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2640339 00:10:37.888 22:08:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.888 22:08:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:38.152 22:08:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:10:38.152 22:08:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:10:38.152 true 00:10:38.411 22:08:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2640339 00:10:38.411 22:08:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.411 22:08:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:38.672 22:08:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:10:38.672 22:08:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:10:38.672 true 00:10:38.672 22:08:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2640339 00:10:38.672 22:08:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.932 22:08:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:39.193 22:08:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:10:39.193 22:08:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:10:39.193 true 00:10:39.193 22:08:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2640339 00:10:39.193 22:08:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.454 22:08:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:39.716 22:08:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:10:39.716 22:08:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:10:39.716 true 00:10:39.716 22:08:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2640339 00:10:39.716 22:08:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.660 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.660 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.661 22:08:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:40.661 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.921 22:08:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:10:40.921 22:08:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:10:40.921 true 00:10:41.181 22:08:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2640339 00:10:41.181 22:08:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.181 22:08:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:41.442 22:08:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:10:41.442 22:08:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:10:41.442 true 00:10:41.704 22:08:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2640339 00:10:41.704 22:08:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.704 22:08:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:41.964 22:08:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:10:41.964 22:08:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:10:41.964 true 00:10:41.964 22:08:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2640339 00:10:41.964 22:08:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.225 22:08:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:42.486 22:08:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:10:42.486 22:08:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:10:42.486 true 00:10:42.486 22:08:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2640339 00:10:42.486 22:08:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.746 22:08:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:42.746 22:08:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:10:42.746 22:08:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:10:43.006 true 00:10:43.006 22:08:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2640339 00:10:43.006 22:08:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.945 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:43.945 22:08:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:43.945 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:43.945 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:43.945 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:44.205 22:08:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:10:44.205 22:08:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:10:44.205 true 00:10:44.205 22:08:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2640339 00:10:44.206 22:08:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.195 22:08:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:45.456 22:08:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:10:45.456 22:08:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:10:45.456 true 00:10:45.456 22:08:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2640339 00:10:45.456 22:08:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.715 22:08:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:45.975 22:08:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:10:45.975 22:08:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:10:45.975 true 00:10:45.975 22:08:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2640339 00:10:45.975 22:08:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:10:47.371 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.371 22:08:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:47.371 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.371 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.371 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.371 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.371 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.371 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.371 22:08:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:10:47.371 22:08:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:10:47.632 true 00:10:47.632 22:08:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2640339 00:10:47.632 22:08:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:48.574 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:48.574 22:08:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:48.574 Initializing NVMe Controllers 00:10:48.574 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:48.574 Controller IO queue size 128, less than required. 00:10:48.574 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:48.574 Controller IO queue size 128, less than required. 00:10:48.574 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:48.574 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:48.574 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:10:48.574 Initialization complete. Launching workers. 
00:10:48.574 ======================================================== 00:10:48.574 Latency(us) 00:10:48.574 Device Information : IOPS MiB/s Average min max 00:10:48.574 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 961.97 0.47 45422.12 2054.43 1141417.70 00:10:48.574 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10521.36 5.14 12165.52 2304.36 498735.18 00:10:48.574 ======================================================== 00:10:48.574 Total : 11483.33 5.61 14951.47 2054.43 1141417.70 00:10:48.574 00:10:48.574 22:08:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:10:48.574 22:08:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:10:48.574 true 00:10:48.835 22:08:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2640339 00:10:48.835 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2640339) - No such process 00:10:48.835 22:08:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2640339 00:10:48.835 22:08:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:48.835 22:08:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:49.095 22:08:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:10:49.095 22:08:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:10:49.095 22:08:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:10:49.095 22:08:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:49.095 22:08:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:10:49.095 null0 00:10:49.095 22:08:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:49.095 22:08:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:49.095 22:08:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:10:49.356 null1 00:10:49.356 22:08:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:49.356 22:08:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:49.356 22:08:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:10:49.617 null2 00:10:49.617 22:08:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:49.617 22:08:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:49.617 22:08:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:10:49.617 
null3 00:10:49.617 22:08:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:49.617 22:08:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:49.617 22:08:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:49.878 null4 00:10:49.878 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:49.878 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:49.878 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:50.139 null5 00:10:50.139 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:50.139 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:50.139 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:50.139 null6 00:10:50.139 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:50.139 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:50.139 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:50.400 null7 00:10:50.400 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:50.400 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:50.400 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:50.400 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:50.400 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:50.400 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:50.400 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:50.400 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:50.400 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:50.400 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:50.400 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
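The latency summary printed just above marks the end of that phase: the perf initiator has exited, so the next kill -0 2640339 fails with "No such process", the loop stops, and the trace shows the script reaping the job and stripping both namespaces before the concurrent phase starts. A sketch of that hand-off, again reassembled from the @53-@55 tags with the same assumed rpc_py and perf_pid names:

    wait "$perf_pid"                                                  # @53: reap the finished perf job (2640339 here)
    "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # @54
    "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2   # @55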
00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
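From roughly 00:10:49 onward the trace above and below is the eight-way hotplug phase: @59-@60 create null0 through null7 with bdev_null_create (arguments exactly as traced), @62-@64 launch eight background add_remove workers and record their PIDs, each worker's @14-@18 body attaches its own namespace ID to cnode1 and removes it again ten times, and @66 waits on all of them (2646836 2646837 ... in this run). A sketch reconstructed from those tags, not the verbatim script (only the rpc_py shorthand is assumed; nthreads, pids, nsid, and bdev all appear in the trace itself):

    add_remove() {                          # @14: per-worker namespace flip
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; ++i)); do      # @16
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # @17
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # @18
        done
    }

    nthreads=8                              # @58
    pids=()                                 # @58
    for ((i = 0; i < nthreads; ++i)); do    # @59
        "$rpc_py" bdev_null_create "null$i" 100 4096   # @60: arguments as traced
    done
    for ((i = 0; i < nthreads; ++i)); do    # @62
        add_remove $((i + 1)) "null$i" &    # @63: worker i handles namespace i+1
        pids+=($!)                          # @64
    done
    wait "${pids[@]}"                       # @66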
00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2646836 2646837 2646839 2646841 2646843 2646845 2646847 2646849 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.401 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:50.662 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:50.662 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:50.662 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:50.662 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:50.662 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:50.662 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:50.662 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:50.662 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:50.662 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.662 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.662 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:50.662 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.662 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.662 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:50.662 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.662 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.662 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:50.662 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.662 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.662 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:50.662 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.662 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.662 22:08:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:50.924 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.924 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.924 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:50.924 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.924 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.924 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:50.924 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.924 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.924 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:50.924 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:50.924 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:50.924 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:10:50.924 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:50.924 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:50.924 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:50.924 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:50.924 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:50.924 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.924 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.924 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:51.186 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.186 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.186 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:51.186 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.186 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.186 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:51.186 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.186 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.186 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:51.186 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.186 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.186 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:51.186 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.186 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.186 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:51.186 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:51.186 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.186 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.186 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:51.186 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.186 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.186 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:51.186 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:51.186 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:51.186 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:51.451 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:51.451 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:51.451 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.451 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.451 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:51.451 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:51.451 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:51.451 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.451 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.451 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:51.451 22:08:16 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.451 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.451 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:51.451 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.451 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.451 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:51.451 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.451 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.451 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:51.451 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.451 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.451 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:51.451 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.451 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.451 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:51.451 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:51.451 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.451 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.451 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:51.756 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:51.756 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:51.756 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:51.756 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:51.756 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:51.756 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.756 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.756 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:51.756 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:51.756 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:51.756 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.757 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.757 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:51.757 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.757 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.757 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:51.757 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.757 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.757 22:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:51.757 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.757 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.757 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:51.757 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:51.757 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.757 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.757 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:52.018 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:10:52.018 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:52.018 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:52.018 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:52.018 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:52.018 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:52.018 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:52.018 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:52.018 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:52.018 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:52.018 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:52.018 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:52.018 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:52.018 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.018 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:52.018 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:52.018 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:52.279 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:52.279 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:52.279 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:52.279 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:52.279 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:52.279 22:08:17 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:52.279 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:52.279 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:52.279 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:52.279 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:52.279 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:52.279 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:52.279 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:52.279 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:52.279 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:52.279 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:52.279 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:52.279 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:52.279 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:52.279 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:52.279 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:52.279 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:52.279 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:52.279 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:52.279 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:52.279 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:52.279 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:52.280 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:52.280 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.280 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:52.541 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:52.541 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:52.541 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:52.541 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:52.541 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:52.541 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:52.541 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:52.541 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:52.541 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:52.541 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:52.541 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:52.541 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:52.541 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:52.541 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:52.541 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:52.541 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:52.541 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:52.541 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:52.541 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:52.541 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:52.541 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:10:52.541 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:52.541 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:52.541 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:52.541 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:52.541 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:52.541 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:52.541 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:52.802 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:52.802 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:52.802 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.802 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:52.802 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:52.802 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:52.802 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:52.802 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:52.802 22:08:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:52.802 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:52.802 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:52.802 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:52.802 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:52.802 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:52.802 22:08:18 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:52.802 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:52.802 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:52.802 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:52.802 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:52.802 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:52.802 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:52.802 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:52.802 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:52.802 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:52.802 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:52.802 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:52.802 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:52.802 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:52.802 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:52.802 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:53.062 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:53.063 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:53.063 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:53.063 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:53.063 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:53.063 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:53.063 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:53.063 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:53.063 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:53.063 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.063 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:53.323 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:53.323 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.323 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:53.323 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:53.323 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.323 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:53.323 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:53.323 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.323 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:53.323 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:53.323 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.323 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:53.323 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:53.323 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.323 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:53.323 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:53.323 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.323 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:53.323 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:10:53.323 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.323 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:53.323 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:53.323 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:53.323 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:53.323 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:53.323 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:53.323 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:53.323 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.323 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:53.323 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:53.323 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:53.584 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:53.584 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:53.584 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.584 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:53.584 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:53.584 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.584 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:53.584 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.585 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:53.585 22:08:18 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:53.585 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:53.585 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.585 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:53.585 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:53.585 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.585 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:53.585 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:53.585 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.585 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:53.585 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:53.585 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.585 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:53.846 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:53.846 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:53.846 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:53.846 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.846 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:53.846 22:08:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:53.846 22:08:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:53.846 22:08:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:53.846 22:08:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:53.846 22:08:19 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.846 22:08:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:53.846 22:08:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.130 22:08:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.130 22:08:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.130 22:08:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.130 22:08:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.130 22:08:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.130 22:08:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.130 22:08:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.130 22:08:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.130 22:08:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:54.130 22:08:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:54.130 22:08:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:54.130 22:08:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:10:54.130 22:08:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:54.130 22:08:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:10:54.130 22:08:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:54.130 22:08:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:54.130 rmmod nvme_tcp 00:10:54.130 rmmod nvme_fabrics 00:10:54.130 rmmod nvme_keyring 00:10:54.130 22:08:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:54.130 22:08:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:10:54.130 22:08:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:10:54.130 22:08:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2639926 ']' 00:10:54.130 22:08:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2639926 00:10:54.130 22:08:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 2639926 ']' 00:10:54.130 22:08:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 2639926 00:10:54.130 22:08:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:10:54.130 22:08:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:54.130 22:08:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2639926 00:10:54.130 22:08:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:54.130 22:08:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:54.130 22:08:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2639926' 00:10:54.130 killing process with pid 2639926 00:10:54.130 22:08:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 2639926 00:10:54.130 
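The interleaved nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns records above all come from the short loop at ns_hotplug_stress.sh lines 16-18. As a rough sketch of the behaviour the trace shows (not the verbatim script: the iteration structure and the backgrounding of the RPCs are inferred from the interleaving), each of the ten iterations hot-adds and hot-removes namespaces 1-8 on cnode1, pairing namespace ID n with bdev null(n-1):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for (( i = 0; i < 10; ++i )); do                                         # ns_hotplug_stress.sh@16
        for n in $(seq 1 8); do
            # Hot-add namespace n backed by one of the null bdevs created during setup ...
            "$rpc_py" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" &   # sh@17
            # ... and hot-remove it again while other adds/removes are still in flight.
            "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$n" &                    # sh@18
        done
        wait
    done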
22:08:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 2639926
00:10:54.392 22:08:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:10:54.392 22:08:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:10:54.392 22:08:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:10:54.392 22:08:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:10:54.392 22:08:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:10:54.392 22:08:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:54.392 22:08:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:10:54.392 22:08:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:56.308 22:08:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:10:56.308
00:10:56.308 real 0m47.930s
00:10:56.308 user 3m10.982s
00:10:56.308 sys 0m15.435s
00:10:56.308 22:08:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable
00:10:56.308 22:08:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:10:56.308 ************************************
00:10:56.308 END TEST nvmf_ns_hotplug_stress
00:10:56.308 ************************************
00:10:56.308 22:08:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:10:56.308 22:08:21 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:10:56.308 22:08:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:10:56.308 22:08:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:56.308 22:08:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:10:56.308 ************************************
00:10:56.308 START TEST nvmf_connect_stress
00:10:56.308 ************************************
00:10:56.308 22:08:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:10:56.569 * Looking for test storage...
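Before the connect_stress run gets going, the nvmftestfini sequence that just closed the hotplug test is easier to follow when condensed into plain commands. This is a sketch of what the trace records, not the nvmf/common.sh source; _remove_spdk_ns runs with its output suppressed, so the namespace deletion shown is the assumed effect:

    nvmfpid=2639926                       # nvmf_tgt process started for the hotplug test
    sync                                  # nvmf/common.sh@117
    modprobe -v -r nvme-tcp               # unloads nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    ps --no-headers -o comm= "$nvmfpid"   # confirm the PID still belongs to an SPDK reactor ...
    kill "$nvmfpid"                       # ... then stop it and wait for it to exit
    wait "$nvmfpid"
    ip netns delete cvl_0_0_ns_spdk       # what _remove_spdk_ns amounts to here (assumption)
    ip -4 addr flush cvl_0_1              # nvmf/common.sh@279: drop the initiator-side address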
00:10:56.569 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:56.569 22:08:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:56.569 22:08:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:10:56.569 22:08:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:56.569 22:08:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:56.569 22:08:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:56.569 22:08:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:56.569 22:08:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:56.569 22:08:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:56.569 22:08:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:56.569 22:08:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:56.569 22:08:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:56.569 22:08:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:56.569 22:08:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:56.569 22:08:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:56.569 22:08:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:56.569 22:08:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:56.569 22:08:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:56.569 22:08:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:56.569 22:08:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:56.569 22:08:21 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:56.569 22:08:21 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:56.569 22:08:21 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:56.569 22:08:21 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.569 22:08:21 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.569 22:08:21 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.569 22:08:21 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:10:56.569 22:08:21 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.569 22:08:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:10:56.569 22:08:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:56.569 22:08:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:56.569 22:08:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:56.569 22:08:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:56.569 22:08:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:56.569 22:08:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:56.569 22:08:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:56.569 22:08:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:56.569 22:08:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:56.569 22:08:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:56.569 22:08:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:56.569 22:08:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:56.569 22:08:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:56.569 22:08:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:56.569 22:08:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.569 22:08:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:10:56.569 22:08:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.569 22:08:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:56.570 22:08:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:56.570 22:08:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:56.570 22:08:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:04.713 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:04.713 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:04.713 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:04.713 22:08:28 
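The pci_devs/net_devs bookkeeping traced here is how nvmf/common.sh decides which NICs the test may use: the known Intel E810/X722 and Mellanox device IDs are collected, and each matching PCI function is mapped to its kernel network interface through sysfs. Reduced to the two ice-bound E810 ports found on this host (the full script scans a cached PCI map rather than hard-coding addresses; the second port's interface, cvl_0_1, is reported a few records further on), the lookup is essentially:

    pci_devs=(0000:4b:00.0 0000:4b:00.1)        # Intel (0x8086), device 0x159b, ice driver
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        # Same sysfs glob as nvmf/common.sh@383: the PCI function's net/ directory
        # holds one entry per interface the kernel bound to it.
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        net_devs+=("${pci_net_devs[@]##*/}")    # strip the path, keep the interface name
    done
    printf '%s\n' "${net_devs[@]}"              # -> cvl_0_0 and cvl_0_1, as echoed in the log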
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:04.713 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:04.713 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:04.714 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:04.714 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:04.714 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:04.714 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:04.714 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:04.714 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:04.714 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:04.714 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:04.714 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:04.714 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:04.714 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:04.714 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:04.714 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:04.714 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:04.714 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:04.714 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:04.714 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:04.714 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:04.714 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:04.714 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:04.714 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:04.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:04.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.439 ms 00:11:04.714 00:11:04.714 --- 10.0.0.2 ping statistics --- 00:11:04.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.714 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:11:04.714 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:04.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:04.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.407 ms 00:11:04.714 00:11:04.714 --- 10.0.0.1 ping statistics --- 00:11:04.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.714 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:11:04.714 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:04.714 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:11:04.714 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:04.714 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:04.714 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:04.714 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:04.714 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:04.714 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:04.714 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:04.714 22:08:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:04.714 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:04.714 22:08:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:04.714 22:08:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.714 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2651983 00:11:04.714 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2651983 00:11:04.714 22:08:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:04.714 22:08:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 2651983 ']' 00:11:04.714 22:08:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.714 22:08:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:04.714 22:08:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.714 22:08:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:04.714 22:08:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.714 [2024-07-15 22:08:28.962278] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
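Stripped of the xtrace bookkeeping, nvmf_tcp_init turns the two ports into a small back-to-back topology: the target-side port is moved into a private network namespace, both sides get addresses on 10.0.0.0/24, TCP port 4420 is opened, and reachability is verified in both directions before nvmfappstart launches nvmf_tgt inside the namespace. The same commands, as they appear in the trace:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side (default namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator
    # nvmfappstart then starts the target inside the namespace and waits for its RPC socket:
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &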
00:11:04.714 [2024-07-15 22:08:28.962328] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.714 EAL: No free 2048 kB hugepages reported on node 1 00:11:04.714 [2024-07-15 22:08:29.046092] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:04.714 [2024-07-15 22:08:29.122587] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:04.714 [2024-07-15 22:08:29.122643] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:04.714 [2024-07-15 22:08:29.122651] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:04.714 [2024-07-15 22:08:29.122658] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:04.714 [2024-07-15 22:08:29.122664] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:04.714 [2024-07-15 22:08:29.122793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:04.714 [2024-07-15 22:08:29.122957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:04.714 [2024-07-15 22:08:29.122958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:04.714 22:08:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:04.714 22:08:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:11:04.714 22:08:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:04.714 22:08:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:04.714 22:08:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.714 22:08:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:04.714 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:04.714 22:08:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.714 22:08:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.714 [2024-07-15 22:08:29.788898] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:04.714 22:08:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.714 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:04.714 22:08:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.714 22:08:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.714 22:08:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.714 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:04.714 22:08:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.714 22:08:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.714 [2024-07-15 22:08:29.813405] tcp.c: 
981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:04.714 22:08:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.714 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:04.714 22:08:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.714 22:08:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.714 NULL1 00:11:04.714 22:08:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.714 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2652025 00:11:04.714 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:04.714 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:04.714 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:04.714 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:04.714 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:04.714 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:04.714 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:04.714 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:04.714 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:04.714 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:04.714 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:04.714 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:04.714 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:04.714 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:04.714 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:04.714 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:04.714 EAL: No free 2048 kB hugepages reported on node 1 00:11:04.714 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:04.714 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:04.714 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:04.715 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:04.715 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:04.715 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:04.715 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:04.715 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:04.715 22:08:29 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:04.715 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:04.715 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:04.715 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:04.715 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:04.715 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:04.715 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:04.715 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:04.715 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:04.715 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:04.715 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:04.715 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:04.715 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:04.715 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:04.715 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:04.715 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:04.715 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:04.715 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:04.715 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:04.715 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:04.715 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2652025 00:11:04.715 22:08:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:04.715 22:08:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.715 22:08:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.975 22:08:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.975 22:08:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2652025 00:11:04.975 22:08:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:04.975 22:08:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.975 22:08:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.545 22:08:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.545 22:08:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2652025 00:11:05.545 22:08:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:05.545 22:08:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.545 22:08:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.830 22:08:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.830 22:08:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # 
kill -0 2652025 00:11:05.830 22:08:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:05.830 22:08:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.830 22:08:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.090 22:08:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.090 22:08:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2652025 00:11:06.090 22:08:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:06.090 22:08:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.090 22:08:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.350 22:08:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.350 22:08:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2652025 00:11:06.350 22:08:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:06.350 22:08:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.350 22:08:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.609 22:08:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.609 22:08:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2652025 00:11:06.609 22:08:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:06.609 22:08:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.609 22:08:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.180 22:08:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.180 22:08:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2652025 00:11:07.180 22:08:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:07.180 22:08:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.180 22:08:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.440 22:08:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.440 22:08:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2652025 00:11:07.440 22:08:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:07.440 22:08:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.440 22:08:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.700 22:08:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.700 22:08:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2652025 00:11:07.700 22:08:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:07.700 22:08:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.700 22:08:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.961 22:08:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.961 22:08:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2652025 00:11:07.961 22:08:33 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:07.961 22:08:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.961 22:08:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.221 22:08:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.221 22:08:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2652025 00:11:08.221 22:08:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:08.221 22:08:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.221 22:08:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.792 22:08:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.792 22:08:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2652025 00:11:08.792 22:08:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:08.792 22:08:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.792 22:08:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.051 22:08:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.051 22:08:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2652025 00:11:09.051 22:08:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.051 22:08:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.051 22:08:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.310 22:08:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.310 22:08:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2652025 00:11:09.310 22:08:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.310 22:08:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.310 22:08:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.570 22:08:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.570 22:08:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2652025 00:11:09.570 22:08:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.570 22:08:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.570 22:08:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.831 22:08:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.831 22:08:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2652025 00:11:09.831 22:08:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.831 22:08:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.831 22:08:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.401 22:08:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.401 22:08:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2652025 00:11:10.401 22:08:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:11:10.401 22:08:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.401 22:08:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.663 22:08:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.663 22:08:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2652025 00:11:10.663 22:08:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.663 22:08:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.663 22:08:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.924 22:08:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.924 22:08:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2652025 00:11:10.924 22:08:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.924 22:08:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.924 22:08:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.184 22:08:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.184 22:08:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2652025 00:11:11.184 22:08:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.184 22:08:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.184 22:08:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.754 22:08:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.754 22:08:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2652025 00:11:11.754 22:08:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.754 22:08:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.754 22:08:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.014 22:08:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.015 22:08:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2652025 00:11:12.015 22:08:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:12.015 22:08:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.015 22:08:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.274 22:08:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.274 22:08:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2652025 00:11:12.274 22:08:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:12.274 22:08:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.274 22:08:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.534 22:08:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.534 22:08:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2652025 00:11:12.534 22:08:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:12.534 22:08:37 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.534 22:08:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.794 22:08:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.794 22:08:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2652025 00:11:12.794 22:08:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:12.794 22:08:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.794 22:08:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.364 22:08:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.364 22:08:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2652025 00:11:13.364 22:08:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:13.364 22:08:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.364 22:08:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.625 22:08:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.625 22:08:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2652025 00:11:13.625 22:08:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:13.625 22:08:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.625 22:08:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.929 22:08:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.929 22:08:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2652025 00:11:13.929 22:08:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:13.929 22:08:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.929 22:08:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:14.211 22:08:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.211 22:08:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2652025 00:11:14.211 22:08:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:14.211 22:08:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.211 22:08:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:14.471 22:08:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.471 22:08:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2652025 00:11:14.471 22:08:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:14.471 22:08:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.471 22:08:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:14.732 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:14.732 22:08:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.732 22:08:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2652025 00:11:14.732 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2652025) - No such process 00:11:14.732 22:08:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2652025 00:11:14.732 22:08:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:14.732 22:08:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:14.732 22:08:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:14.732 22:08:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:14.732 22:08:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:11:14.732 22:08:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:14.732 22:08:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:11:14.732 22:08:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:14.732 22:08:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:14.732 rmmod nvme_tcp 00:11:14.994 rmmod nvme_fabrics 00:11:14.994 rmmod nvme_keyring 00:11:14.994 22:08:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:14.994 22:08:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:11:14.994 22:08:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:11:14.994 22:08:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2651983 ']' 00:11:14.994 22:08:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2651983 00:11:14.994 22:08:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 2651983 ']' 00:11:14.994 22:08:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 2651983 00:11:14.994 22:08:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:11:14.994 22:08:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:14.994 22:08:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2651983 00:11:14.994 22:08:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:14.994 22:08:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:14.994 22:08:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2651983' 00:11:14.994 killing process with pid 2651983 00:11:14.994 22:08:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 2651983 00:11:14.994 22:08:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 2651983 00:11:14.994 22:08:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:14.994 22:08:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:14.994 22:08:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:14.994 22:08:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:14.994 22:08:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:14.994 22:08:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.994 22:08:40 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:14.994 22:08:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.541 22:08:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:17.541 00:11:17.541 real 0m20.732s 00:11:17.541 user 0m41.897s 00:11:17.541 sys 0m8.662s 00:11:17.541 22:08:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:17.541 22:08:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:17.541 ************************************ 00:11:17.541 END TEST nvmf_connect_stress 00:11:17.541 ************************************ 00:11:17.541 22:08:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:17.541 22:08:42 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:17.541 22:08:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:17.541 22:08:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:17.541 22:08:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:17.541 ************************************ 00:11:17.541 START TEST nvmf_fused_ordering 00:11:17.541 ************************************ 00:11:17.541 22:08:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:17.541 * Looking for test storage... 00:11:17.541 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:17.541 22:08:42 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:17.541 22:08:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:17.541 22:08:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:17.541 22:08:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:17.541 22:08:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:17.541 22:08:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:17.541 22:08:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:17.541 22:08:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:17.541 22:08:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:17.541 22:08:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:17.541 22:08:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:17.541 22:08:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:17.541 22:08:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:17.541 22:08:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:17.541 22:08:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:17.541 22:08:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:17.541 22:08:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 
-- # NET_TYPE=phy 00:11:17.541 22:08:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:17.541 22:08:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:17.541 22:08:42 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:17.541 22:08:42 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:17.541 22:08:42 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:17.541 22:08:42 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.541 22:08:42 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.541 22:08:42 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.541 22:08:42 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:17.542 22:08:42 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.542 22:08:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:11:17.542 22:08:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:17.542 22:08:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:17.542 22:08:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:17.542 22:08:42 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:17.542 22:08:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:17.542 22:08:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:17.542 22:08:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:17.542 22:08:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:17.542 22:08:42 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:17.542 22:08:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:17.542 22:08:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:17.542 22:08:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:17.542 22:08:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:17.542 22:08:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:17.542 22:08:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.542 22:08:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:17.542 22:08:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.542 22:08:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:17.542 22:08:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:17.542 22:08:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:11:17.542 22:08:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:24.126 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:24.126 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:11:24.126 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:24.126 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:24.126 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:24.126 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:24.126 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:24.126 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:11:24.126 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:24.126 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:11:24.126 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:11:24.126 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:11:24.126 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:11:24.126 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:11:24.126 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:11:24.126 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:24.126 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:24.126 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:24.126 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:24.126 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:24.126 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:24.126 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:24.126 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:24.126 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:24.126 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:24.126 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:24.126 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:24.126 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:24.126 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:24.126 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:24.126 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:24.126 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:24.127 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:24.127 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:24.127 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:24.127 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:24.127 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:24.387 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:24.387 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:24.387 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:24.387 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:24.387 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:24.387 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:24.387 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:24.387 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:24.387 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.565 ms 00:11:24.387 00:11:24.387 --- 10.0.0.2 ping statistics --- 00:11:24.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.387 rtt min/avg/max/mdev = 0.565/0.565/0.565/0.000 ms 00:11:24.387 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:24.387 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:24.387 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.347 ms 00:11:24.387 00:11:24.387 --- 10.0.0.1 ping statistics --- 00:11:24.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.387 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:11:24.387 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:24.387 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:11:24.387 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:24.387 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:24.387 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:24.387 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:24.387 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:24.387 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:24.387 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:24.387 22:08:49 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:24.387 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:24.647 22:08:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:24.647 22:08:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:24.647 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2658367 00:11:24.647 22:08:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2658367 00:11:24.647 22:08:49 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:24.647 22:08:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 2658367 ']' 00:11:24.647 22:08:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.647 22:08:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:24.647 22:08:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:24.647 22:08:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:24.647 22:08:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:24.647 [2024-07-15 22:08:49.774456] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:11:24.647 [2024-07-15 22:08:49.774519] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:24.647 EAL: No free 2048 kB hugepages reported on node 1 00:11:24.647 [2024-07-15 22:08:49.863277] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.647 [2024-07-15 22:08:49.955507] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:24.647 [2024-07-15 22:08:49.955561] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:24.647 [2024-07-15 22:08:49.955569] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:24.647 [2024-07-15 22:08:49.955576] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:24.647 [2024-07-15 22:08:49.955582] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
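The rpc_cmd calls traced below are thin wrappers around scripts/rpc.py. Outside the test harness, the same target launch and setup looks roughly like the sketch that follows; it assumes the default /var/tmp/spdk.sock RPC socket and shortens the workspace paths from the log to paths relative to an SPDK checkout:

    # launch the target inside the test namespace, as nvmfappstart does above
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

    # once the RPC socket is listening: create the TCP transport, a subsystem backed
    # by a null bdev namespace, and a listener on the namespaced address
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512
    ./scripts/rpc.py bdev_wait_for_examine
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

All of the options shown are taken verbatim from the rpc_cmd trace in this log; only the relative paths and the explicit rpc.py invocation are editorial simplifications.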
00:11:24.647 [2024-07-15 22:08:49.955605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:25.590 22:08:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:25.590 22:08:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:11:25.590 22:08:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:25.590 22:08:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:25.590 22:08:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:25.590 22:08:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:25.590 22:08:50 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:25.590 22:08:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.590 22:08:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:25.590 [2024-07-15 22:08:50.608964] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:25.590 22:08:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.590 22:08:50 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:25.590 22:08:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.590 22:08:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:25.590 22:08:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.590 22:08:50 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:25.590 22:08:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.590 22:08:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:25.590 [2024-07-15 22:08:50.629173] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:25.590 22:08:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.590 22:08:50 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:25.590 22:08:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.590 22:08:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:25.590 NULL1 00:11:25.590 22:08:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.590 22:08:50 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:25.590 22:08:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.590 22:08:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:25.590 22:08:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.590 22:08:50 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:25.590 22:08:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.590 22:08:50 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:25.591 22:08:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.591 22:08:50 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:25.591 [2024-07-15 22:08:50.686620] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:11:25.591 [2024-07-15 22:08:50.686665] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2658403 ] 00:11:25.591 EAL: No free 2048 kB hugepages reported on node 1 00:11:25.852 Attached to nqn.2016-06.io.spdk:cnode1 00:11:25.852 Namespace ID: 1 size: 1GB 00:11:25.852 fused_ordering(0) 00:11:25.852 fused_ordering(1) 00:11:25.852 fused_ordering(2) 00:11:25.852 fused_ordering(3) 00:11:25.852 fused_ordering(4) 00:11:25.852 fused_ordering(5) 00:11:25.852 fused_ordering(6) 00:11:25.852 fused_ordering(7) 00:11:25.852 fused_ordering(8) 00:11:25.852 fused_ordering(9) 00:11:25.852 fused_ordering(10) 00:11:25.852 fused_ordering(11) 00:11:25.852 fused_ordering(12) 00:11:25.852 fused_ordering(13) 00:11:25.852 fused_ordering(14) 00:11:25.852 fused_ordering(15) 00:11:25.852 fused_ordering(16) 00:11:25.852 fused_ordering(17) 00:11:25.852 fused_ordering(18) 00:11:25.852 fused_ordering(19) 00:11:25.852 fused_ordering(20) 00:11:25.852 fused_ordering(21) 00:11:25.852 fused_ordering(22) 00:11:25.852 fused_ordering(23) 00:11:25.852 fused_ordering(24) 00:11:25.852 fused_ordering(25) 00:11:25.852 fused_ordering(26) 00:11:25.852 fused_ordering(27) 00:11:25.852 fused_ordering(28) 00:11:25.852 fused_ordering(29) 00:11:25.852 fused_ordering(30) 00:11:25.852 fused_ordering(31) 00:11:25.852 fused_ordering(32) 00:11:25.852 fused_ordering(33) 00:11:25.852 fused_ordering(34) 00:11:25.852 fused_ordering(35) 00:11:25.852 fused_ordering(36) 00:11:25.852 fused_ordering(37) 00:11:25.852 fused_ordering(38) 00:11:25.852 fused_ordering(39) 00:11:25.852 fused_ordering(40) 00:11:25.852 fused_ordering(41) 00:11:25.852 fused_ordering(42) 00:11:25.852 fused_ordering(43) 00:11:25.852 fused_ordering(44) 00:11:25.852 fused_ordering(45) 00:11:25.852 fused_ordering(46) 00:11:25.852 fused_ordering(47) 00:11:25.852 fused_ordering(48) 00:11:25.852 fused_ordering(49) 00:11:25.852 fused_ordering(50) 00:11:25.852 fused_ordering(51) 00:11:25.852 fused_ordering(52) 00:11:25.852 fused_ordering(53) 00:11:25.852 fused_ordering(54) 00:11:25.852 fused_ordering(55) 00:11:25.852 fused_ordering(56) 00:11:25.852 fused_ordering(57) 00:11:25.852 fused_ordering(58) 00:11:25.852 fused_ordering(59) 00:11:25.852 fused_ordering(60) 00:11:25.852 fused_ordering(61) 00:11:25.852 fused_ordering(62) 00:11:25.852 fused_ordering(63) 00:11:25.852 fused_ordering(64) 00:11:25.852 fused_ordering(65) 00:11:25.852 fused_ordering(66) 00:11:25.852 fused_ordering(67) 00:11:25.852 fused_ordering(68) 00:11:25.852 fused_ordering(69) 00:11:25.852 fused_ordering(70) 00:11:25.852 fused_ordering(71) 00:11:25.852 fused_ordering(72) 00:11:25.852 fused_ordering(73) 00:11:25.852 fused_ordering(74) 00:11:25.852 fused_ordering(75) 00:11:25.852 fused_ordering(76) 00:11:25.852 fused_ordering(77) 00:11:25.852 fused_ordering(78) 00:11:25.852 
fused_ordering(79) 00:11:25.852 fused_ordering(80) 00:11:25.852 fused_ordering(81) 00:11:25.852 fused_ordering(82) 00:11:25.852 fused_ordering(83) 00:11:25.852 fused_ordering(84) 00:11:25.852 fused_ordering(85) 00:11:25.852 fused_ordering(86) 00:11:25.852 fused_ordering(87) 00:11:25.852 fused_ordering(88) 00:11:25.852 fused_ordering(89) 00:11:25.852 fused_ordering(90) 00:11:25.852 fused_ordering(91) 00:11:25.852 fused_ordering(92) 00:11:25.852 fused_ordering(93) 00:11:25.852 fused_ordering(94) 00:11:25.852 fused_ordering(95) 00:11:25.852 fused_ordering(96) 00:11:25.852 fused_ordering(97) 00:11:25.852 fused_ordering(98) 00:11:25.852 fused_ordering(99) 00:11:25.852 fused_ordering(100) 00:11:25.852 fused_ordering(101) 00:11:25.852 fused_ordering(102) 00:11:25.852 fused_ordering(103) 00:11:25.852 fused_ordering(104) 00:11:25.852 fused_ordering(105) 00:11:25.852 fused_ordering(106) 00:11:25.852 fused_ordering(107) 00:11:25.852 fused_ordering(108) 00:11:25.852 fused_ordering(109) 00:11:25.852 fused_ordering(110) 00:11:25.852 fused_ordering(111) 00:11:25.852 fused_ordering(112) 00:11:25.852 fused_ordering(113) 00:11:25.852 fused_ordering(114) 00:11:25.852 fused_ordering(115) 00:11:25.852 fused_ordering(116) 00:11:25.852 fused_ordering(117) 00:11:25.852 fused_ordering(118) 00:11:25.852 fused_ordering(119) 00:11:25.852 fused_ordering(120) 00:11:25.852 fused_ordering(121) 00:11:25.852 fused_ordering(122) 00:11:25.852 fused_ordering(123) 00:11:25.852 fused_ordering(124) 00:11:25.852 fused_ordering(125) 00:11:25.852 fused_ordering(126) 00:11:25.852 fused_ordering(127) 00:11:25.852 fused_ordering(128) 00:11:25.852 fused_ordering(129) 00:11:25.852 fused_ordering(130) 00:11:25.852 fused_ordering(131) 00:11:25.852 fused_ordering(132) 00:11:25.852 fused_ordering(133) 00:11:25.852 fused_ordering(134) 00:11:25.852 fused_ordering(135) 00:11:25.852 fused_ordering(136) 00:11:25.852 fused_ordering(137) 00:11:25.852 fused_ordering(138) 00:11:25.852 fused_ordering(139) 00:11:25.852 fused_ordering(140) 00:11:25.852 fused_ordering(141) 00:11:25.852 fused_ordering(142) 00:11:25.852 fused_ordering(143) 00:11:25.852 fused_ordering(144) 00:11:25.852 fused_ordering(145) 00:11:25.852 fused_ordering(146) 00:11:25.852 fused_ordering(147) 00:11:25.852 fused_ordering(148) 00:11:25.852 fused_ordering(149) 00:11:25.852 fused_ordering(150) 00:11:25.852 fused_ordering(151) 00:11:25.852 fused_ordering(152) 00:11:25.852 fused_ordering(153) 00:11:25.852 fused_ordering(154) 00:11:25.852 fused_ordering(155) 00:11:25.852 fused_ordering(156) 00:11:25.852 fused_ordering(157) 00:11:25.852 fused_ordering(158) 00:11:25.852 fused_ordering(159) 00:11:25.852 fused_ordering(160) 00:11:25.852 fused_ordering(161) 00:11:25.852 fused_ordering(162) 00:11:25.852 fused_ordering(163) 00:11:25.852 fused_ordering(164) 00:11:25.852 fused_ordering(165) 00:11:25.853 fused_ordering(166) 00:11:25.853 fused_ordering(167) 00:11:25.853 fused_ordering(168) 00:11:25.853 fused_ordering(169) 00:11:25.853 fused_ordering(170) 00:11:25.853 fused_ordering(171) 00:11:25.853 fused_ordering(172) 00:11:25.853 fused_ordering(173) 00:11:25.853 fused_ordering(174) 00:11:25.853 fused_ordering(175) 00:11:25.853 fused_ordering(176) 00:11:25.853 fused_ordering(177) 00:11:25.853 fused_ordering(178) 00:11:25.853 fused_ordering(179) 00:11:25.853 fused_ordering(180) 00:11:25.853 fused_ordering(181) 00:11:25.853 fused_ordering(182) 00:11:25.853 fused_ordering(183) 00:11:25.853 fused_ordering(184) 00:11:25.853 fused_ordering(185) 00:11:25.853 fused_ordering(186) 00:11:25.853 
fused_ordering(187) 00:11:25.853 fused_ordering(188) 00:11:25.853 fused_ordering(189) 00:11:25.853 fused_ordering(190) 00:11:25.853 fused_ordering(191) 00:11:25.853 fused_ordering(192) 00:11:25.853 fused_ordering(193) 00:11:25.853 fused_ordering(194) 00:11:25.853 fused_ordering(195) 00:11:25.853 fused_ordering(196) 00:11:25.853 fused_ordering(197) 00:11:25.853 fused_ordering(198) 00:11:25.853 fused_ordering(199) 00:11:25.853 fused_ordering(200) 00:11:25.853 fused_ordering(201) 00:11:25.853 fused_ordering(202) 00:11:25.853 fused_ordering(203) 00:11:25.853 fused_ordering(204) 00:11:25.853 fused_ordering(205) 00:11:26.426 fused_ordering(206) 00:11:26.426 fused_ordering(207) 00:11:26.426 fused_ordering(208) 00:11:26.426 fused_ordering(209) 00:11:26.426 fused_ordering(210) 00:11:26.426 fused_ordering(211) 00:11:26.426 fused_ordering(212) 00:11:26.426 fused_ordering(213) 00:11:26.426 fused_ordering(214) 00:11:26.426 fused_ordering(215) 00:11:26.426 fused_ordering(216) 00:11:26.426 fused_ordering(217) 00:11:26.426 fused_ordering(218) 00:11:26.426 fused_ordering(219) 00:11:26.426 fused_ordering(220) 00:11:26.426 fused_ordering(221) 00:11:26.426 fused_ordering(222) 00:11:26.426 fused_ordering(223) 00:11:26.426 fused_ordering(224) 00:11:26.426 fused_ordering(225) 00:11:26.426 fused_ordering(226) 00:11:26.426 fused_ordering(227) 00:11:26.426 fused_ordering(228) 00:11:26.426 fused_ordering(229) 00:11:26.426 fused_ordering(230) 00:11:26.426 fused_ordering(231) 00:11:26.426 fused_ordering(232) 00:11:26.426 fused_ordering(233) 00:11:26.426 fused_ordering(234) 00:11:26.426 fused_ordering(235) 00:11:26.426 fused_ordering(236) 00:11:26.426 fused_ordering(237) 00:11:26.426 fused_ordering(238) 00:11:26.426 fused_ordering(239) 00:11:26.426 fused_ordering(240) 00:11:26.426 fused_ordering(241) 00:11:26.426 fused_ordering(242) 00:11:26.426 fused_ordering(243) 00:11:26.426 fused_ordering(244) 00:11:26.426 fused_ordering(245) 00:11:26.426 fused_ordering(246) 00:11:26.426 fused_ordering(247) 00:11:26.426 fused_ordering(248) 00:11:26.426 fused_ordering(249) 00:11:26.426 fused_ordering(250) 00:11:26.426 fused_ordering(251) 00:11:26.426 fused_ordering(252) 00:11:26.426 fused_ordering(253) 00:11:26.426 fused_ordering(254) 00:11:26.426 fused_ordering(255) 00:11:26.426 fused_ordering(256) 00:11:26.426 fused_ordering(257) 00:11:26.426 fused_ordering(258) 00:11:26.426 fused_ordering(259) 00:11:26.426 fused_ordering(260) 00:11:26.426 fused_ordering(261) 00:11:26.426 fused_ordering(262) 00:11:26.426 fused_ordering(263) 00:11:26.426 fused_ordering(264) 00:11:26.426 fused_ordering(265) 00:11:26.426 fused_ordering(266) 00:11:26.426 fused_ordering(267) 00:11:26.426 fused_ordering(268) 00:11:26.426 fused_ordering(269) 00:11:26.426 fused_ordering(270) 00:11:26.426 fused_ordering(271) 00:11:26.426 fused_ordering(272) 00:11:26.426 fused_ordering(273) 00:11:26.426 fused_ordering(274) 00:11:26.426 fused_ordering(275) 00:11:26.426 fused_ordering(276) 00:11:26.426 fused_ordering(277) 00:11:26.426 fused_ordering(278) 00:11:26.426 fused_ordering(279) 00:11:26.426 fused_ordering(280) 00:11:26.426 fused_ordering(281) 00:11:26.426 fused_ordering(282) 00:11:26.426 fused_ordering(283) 00:11:26.426 fused_ordering(284) 00:11:26.426 fused_ordering(285) 00:11:26.426 fused_ordering(286) 00:11:26.426 fused_ordering(287) 00:11:26.426 fused_ordering(288) 00:11:26.426 fused_ordering(289) 00:11:26.426 fused_ordering(290) 00:11:26.426 fused_ordering(291) 00:11:26.426 fused_ordering(292) 00:11:26.426 fused_ordering(293) 00:11:26.426 fused_ordering(294) 
00:11:26.426 fused_ordering(295) 00:11:26.426 fused_ordering(296) 00:11:26.426 fused_ordering(297) 00:11:26.426 fused_ordering(298) 00:11:26.426 fused_ordering(299) 00:11:26.426 fused_ordering(300) 00:11:26.426 fused_ordering(301) 00:11:26.426 fused_ordering(302) 00:11:26.426 fused_ordering(303) 00:11:26.426 fused_ordering(304) 00:11:26.426 fused_ordering(305) 00:11:26.426 fused_ordering(306) 00:11:26.426 fused_ordering(307) 00:11:26.426 fused_ordering(308) 00:11:26.426 fused_ordering(309) 00:11:26.426 fused_ordering(310) 00:11:26.426 fused_ordering(311) 00:11:26.426 fused_ordering(312) 00:11:26.426 fused_ordering(313) 00:11:26.426 fused_ordering(314) 00:11:26.426 fused_ordering(315) 00:11:26.426 fused_ordering(316) 00:11:26.426 fused_ordering(317) 00:11:26.426 fused_ordering(318) 00:11:26.426 fused_ordering(319) 00:11:26.426 fused_ordering(320) 00:11:26.426 fused_ordering(321) 00:11:26.426 fused_ordering(322) 00:11:26.426 fused_ordering(323) 00:11:26.426 fused_ordering(324) 00:11:26.426 fused_ordering(325) 00:11:26.426 fused_ordering(326) 00:11:26.426 fused_ordering(327) 00:11:26.426 fused_ordering(328) 00:11:26.426 fused_ordering(329) 00:11:26.426 fused_ordering(330) 00:11:26.426 fused_ordering(331) 00:11:26.426 fused_ordering(332) 00:11:26.426 fused_ordering(333) 00:11:26.426 fused_ordering(334) 00:11:26.426 fused_ordering(335) 00:11:26.426 fused_ordering(336) 00:11:26.426 fused_ordering(337) 00:11:26.426 fused_ordering(338) 00:11:26.426 fused_ordering(339) 00:11:26.426 fused_ordering(340) 00:11:26.426 fused_ordering(341) 00:11:26.426 fused_ordering(342) 00:11:26.426 fused_ordering(343) 00:11:26.426 fused_ordering(344) 00:11:26.426 fused_ordering(345) 00:11:26.426 fused_ordering(346) 00:11:26.426 fused_ordering(347) 00:11:26.426 fused_ordering(348) 00:11:26.426 fused_ordering(349) 00:11:26.426 fused_ordering(350) 00:11:26.426 fused_ordering(351) 00:11:26.426 fused_ordering(352) 00:11:26.426 fused_ordering(353) 00:11:26.426 fused_ordering(354) 00:11:26.426 fused_ordering(355) 00:11:26.426 fused_ordering(356) 00:11:26.426 fused_ordering(357) 00:11:26.426 fused_ordering(358) 00:11:26.426 fused_ordering(359) 00:11:26.426 fused_ordering(360) 00:11:26.426 fused_ordering(361) 00:11:26.426 fused_ordering(362) 00:11:26.426 fused_ordering(363) 00:11:26.426 fused_ordering(364) 00:11:26.426 fused_ordering(365) 00:11:26.426 fused_ordering(366) 00:11:26.426 fused_ordering(367) 00:11:26.426 fused_ordering(368) 00:11:26.426 fused_ordering(369) 00:11:26.426 fused_ordering(370) 00:11:26.426 fused_ordering(371) 00:11:26.426 fused_ordering(372) 00:11:26.426 fused_ordering(373) 00:11:26.426 fused_ordering(374) 00:11:26.426 fused_ordering(375) 00:11:26.426 fused_ordering(376) 00:11:26.426 fused_ordering(377) 00:11:26.426 fused_ordering(378) 00:11:26.426 fused_ordering(379) 00:11:26.426 fused_ordering(380) 00:11:26.426 fused_ordering(381) 00:11:26.426 fused_ordering(382) 00:11:26.426 fused_ordering(383) 00:11:26.426 fused_ordering(384) 00:11:26.426 fused_ordering(385) 00:11:26.426 fused_ordering(386) 00:11:26.426 fused_ordering(387) 00:11:26.426 fused_ordering(388) 00:11:26.426 fused_ordering(389) 00:11:26.426 fused_ordering(390) 00:11:26.426 fused_ordering(391) 00:11:26.426 fused_ordering(392) 00:11:26.426 fused_ordering(393) 00:11:26.426 fused_ordering(394) 00:11:26.426 fused_ordering(395) 00:11:26.426 fused_ordering(396) 00:11:26.426 fused_ordering(397) 00:11:26.426 fused_ordering(398) 00:11:26.426 fused_ordering(399) 00:11:26.426 fused_ordering(400) 00:11:26.426 fused_ordering(401) 00:11:26.426 
fused_ordering(402) 00:11:26.426 fused_ordering(403) 00:11:26.426 fused_ordering(404) 00:11:26.426 fused_ordering(405) 00:11:26.426 fused_ordering(406) 00:11:26.426 fused_ordering(407) 00:11:26.426 fused_ordering(408) 00:11:26.426 fused_ordering(409) 00:11:26.426 fused_ordering(410) 00:11:26.997 fused_ordering(411) 00:11:26.997 fused_ordering(412) 00:11:26.997 fused_ordering(413) 00:11:26.997 fused_ordering(414) 00:11:26.997 fused_ordering(415) 00:11:26.997 fused_ordering(416) 00:11:26.997 fused_ordering(417) 00:11:26.997 fused_ordering(418) 00:11:26.997 fused_ordering(419) 00:11:26.997 fused_ordering(420) 00:11:26.997 fused_ordering(421) 00:11:26.997 fused_ordering(422) 00:11:26.997 fused_ordering(423) 00:11:26.997 fused_ordering(424) 00:11:26.997 fused_ordering(425) 00:11:26.997 fused_ordering(426) 00:11:26.997 fused_ordering(427) 00:11:26.997 fused_ordering(428) 00:11:26.997 fused_ordering(429) 00:11:26.997 fused_ordering(430) 00:11:26.997 fused_ordering(431) 00:11:26.997 fused_ordering(432) 00:11:26.997 fused_ordering(433) 00:11:26.997 fused_ordering(434) 00:11:26.997 fused_ordering(435) 00:11:26.997 fused_ordering(436) 00:11:26.997 fused_ordering(437) 00:11:26.997 fused_ordering(438) 00:11:26.997 fused_ordering(439) 00:11:26.997 fused_ordering(440) 00:11:26.997 fused_ordering(441) 00:11:26.997 fused_ordering(442) 00:11:26.997 fused_ordering(443) 00:11:26.997 fused_ordering(444) 00:11:26.997 fused_ordering(445) 00:11:26.997 fused_ordering(446) 00:11:26.997 fused_ordering(447) 00:11:26.997 fused_ordering(448) 00:11:26.997 fused_ordering(449) 00:11:26.997 fused_ordering(450) 00:11:26.997 fused_ordering(451) 00:11:26.997 fused_ordering(452) 00:11:26.997 fused_ordering(453) 00:11:26.997 fused_ordering(454) 00:11:26.997 fused_ordering(455) 00:11:26.997 fused_ordering(456) 00:11:26.997 fused_ordering(457) 00:11:26.997 fused_ordering(458) 00:11:26.997 fused_ordering(459) 00:11:26.997 fused_ordering(460) 00:11:26.997 fused_ordering(461) 00:11:26.997 fused_ordering(462) 00:11:26.997 fused_ordering(463) 00:11:26.997 fused_ordering(464) 00:11:26.997 fused_ordering(465) 00:11:26.997 fused_ordering(466) 00:11:26.997 fused_ordering(467) 00:11:26.997 fused_ordering(468) 00:11:26.997 fused_ordering(469) 00:11:26.997 fused_ordering(470) 00:11:26.997 fused_ordering(471) 00:11:26.997 fused_ordering(472) 00:11:26.998 fused_ordering(473) 00:11:26.998 fused_ordering(474) 00:11:26.998 fused_ordering(475) 00:11:26.998 fused_ordering(476) 00:11:26.998 fused_ordering(477) 00:11:26.998 fused_ordering(478) 00:11:26.998 fused_ordering(479) 00:11:26.998 fused_ordering(480) 00:11:26.998 fused_ordering(481) 00:11:26.998 fused_ordering(482) 00:11:26.998 fused_ordering(483) 00:11:26.998 fused_ordering(484) 00:11:26.998 fused_ordering(485) 00:11:26.998 fused_ordering(486) 00:11:26.998 fused_ordering(487) 00:11:26.998 fused_ordering(488) 00:11:26.998 fused_ordering(489) 00:11:26.998 fused_ordering(490) 00:11:26.998 fused_ordering(491) 00:11:26.998 fused_ordering(492) 00:11:26.998 fused_ordering(493) 00:11:26.998 fused_ordering(494) 00:11:26.998 fused_ordering(495) 00:11:26.998 fused_ordering(496) 00:11:26.998 fused_ordering(497) 00:11:26.998 fused_ordering(498) 00:11:26.998 fused_ordering(499) 00:11:26.998 fused_ordering(500) 00:11:26.998 fused_ordering(501) 00:11:26.998 fused_ordering(502) 00:11:26.998 fused_ordering(503) 00:11:26.998 fused_ordering(504) 00:11:26.998 fused_ordering(505) 00:11:26.998 fused_ordering(506) 00:11:26.998 fused_ordering(507) 00:11:26.998 fused_ordering(508) 00:11:26.998 fused_ordering(509) 
00:11:26.998 fused_ordering(510) 00:11:26.998 fused_ordering(511) 00:11:26.998 fused_ordering(512) 00:11:26.998 fused_ordering(513) 00:11:26.998 fused_ordering(514) 00:11:26.998 fused_ordering(515) 00:11:26.998 fused_ordering(516) 00:11:26.998 fused_ordering(517) 00:11:26.998 fused_ordering(518) 00:11:26.998 fused_ordering(519) 00:11:26.998 fused_ordering(520) 00:11:26.998 fused_ordering(521) 00:11:26.998 fused_ordering(522) 00:11:26.998 fused_ordering(523) 00:11:26.998 fused_ordering(524) 00:11:26.998 fused_ordering(525) 00:11:26.998 fused_ordering(526) 00:11:26.998 fused_ordering(527) 00:11:26.998 fused_ordering(528) 00:11:26.998 fused_ordering(529) 00:11:26.998 fused_ordering(530) 00:11:26.998 fused_ordering(531) 00:11:26.998 fused_ordering(532) 00:11:26.998 fused_ordering(533) 00:11:26.998 fused_ordering(534) 00:11:26.998 fused_ordering(535) 00:11:26.998 fused_ordering(536) 00:11:26.998 fused_ordering(537) 00:11:26.998 fused_ordering(538) 00:11:26.998 fused_ordering(539) 00:11:26.998 fused_ordering(540) 00:11:26.998 fused_ordering(541) 00:11:26.998 fused_ordering(542) 00:11:26.998 fused_ordering(543) 00:11:26.998 fused_ordering(544) 00:11:26.998 fused_ordering(545) 00:11:26.998 fused_ordering(546) 00:11:26.998 fused_ordering(547) 00:11:26.998 fused_ordering(548) 00:11:26.998 fused_ordering(549) 00:11:26.998 fused_ordering(550) 00:11:26.998 fused_ordering(551) 00:11:26.998 fused_ordering(552) 00:11:26.998 fused_ordering(553) 00:11:26.998 fused_ordering(554) 00:11:26.998 fused_ordering(555) 00:11:26.998 fused_ordering(556) 00:11:26.998 fused_ordering(557) 00:11:26.998 fused_ordering(558) 00:11:26.998 fused_ordering(559) 00:11:26.998 fused_ordering(560) 00:11:26.998 fused_ordering(561) 00:11:26.998 fused_ordering(562) 00:11:26.998 fused_ordering(563) 00:11:26.998 fused_ordering(564) 00:11:26.998 fused_ordering(565) 00:11:26.998 fused_ordering(566) 00:11:26.998 fused_ordering(567) 00:11:26.998 fused_ordering(568) 00:11:26.998 fused_ordering(569) 00:11:26.998 fused_ordering(570) 00:11:26.998 fused_ordering(571) 00:11:26.998 fused_ordering(572) 00:11:26.998 fused_ordering(573) 00:11:26.998 fused_ordering(574) 00:11:26.998 fused_ordering(575) 00:11:26.998 fused_ordering(576) 00:11:26.998 fused_ordering(577) 00:11:26.998 fused_ordering(578) 00:11:26.998 fused_ordering(579) 00:11:26.998 fused_ordering(580) 00:11:26.998 fused_ordering(581) 00:11:26.998 fused_ordering(582) 00:11:26.998 fused_ordering(583) 00:11:26.998 fused_ordering(584) 00:11:26.998 fused_ordering(585) 00:11:26.998 fused_ordering(586) 00:11:26.998 fused_ordering(587) 00:11:26.998 fused_ordering(588) 00:11:26.998 fused_ordering(589) 00:11:26.998 fused_ordering(590) 00:11:26.998 fused_ordering(591) 00:11:26.998 fused_ordering(592) 00:11:26.998 fused_ordering(593) 00:11:26.998 fused_ordering(594) 00:11:26.998 fused_ordering(595) 00:11:26.998 fused_ordering(596) 00:11:26.998 fused_ordering(597) 00:11:26.998 fused_ordering(598) 00:11:26.998 fused_ordering(599) 00:11:26.998 fused_ordering(600) 00:11:26.998 fused_ordering(601) 00:11:26.998 fused_ordering(602) 00:11:26.998 fused_ordering(603) 00:11:26.998 fused_ordering(604) 00:11:26.998 fused_ordering(605) 00:11:26.998 fused_ordering(606) 00:11:26.998 fused_ordering(607) 00:11:26.998 fused_ordering(608) 00:11:26.998 fused_ordering(609) 00:11:26.998 fused_ordering(610) 00:11:26.998 fused_ordering(611) 00:11:26.998 fused_ordering(612) 00:11:26.998 fused_ordering(613) 00:11:26.998 fused_ordering(614) 00:11:26.998 fused_ordering(615) 00:11:27.569 fused_ordering(616) 00:11:27.569 
fused_ordering(617) 00:11:27.569 fused_ordering(618) 00:11:27.569 fused_ordering(619) 00:11:27.569 fused_ordering(620) 00:11:27.569 fused_ordering(621) 00:11:27.569 fused_ordering(622) 00:11:27.569 fused_ordering(623) 00:11:27.569 fused_ordering(624) 00:11:27.569 fused_ordering(625) 00:11:27.569 fused_ordering(626) 00:11:27.569 fused_ordering(627) 00:11:27.569 fused_ordering(628) 00:11:27.569 fused_ordering(629) 00:11:27.569 fused_ordering(630) 00:11:27.569 fused_ordering(631) 00:11:27.569 fused_ordering(632) 00:11:27.569 fused_ordering(633) 00:11:27.569 fused_ordering(634) 00:11:27.569 fused_ordering(635) 00:11:27.569 fused_ordering(636) 00:11:27.569 fused_ordering(637) 00:11:27.569 fused_ordering(638) 00:11:27.569 fused_ordering(639) 00:11:27.569 fused_ordering(640) 00:11:27.569 fused_ordering(641) 00:11:27.569 fused_ordering(642) 00:11:27.569 fused_ordering(643) 00:11:27.569 fused_ordering(644) 00:11:27.570 fused_ordering(645) 00:11:27.570 fused_ordering(646) 00:11:27.570 fused_ordering(647) 00:11:27.570 fused_ordering(648) 00:11:27.570 fused_ordering(649) 00:11:27.570 fused_ordering(650) 00:11:27.570 fused_ordering(651) 00:11:27.570 fused_ordering(652) 00:11:27.570 fused_ordering(653) 00:11:27.570 fused_ordering(654) 00:11:27.570 fused_ordering(655) 00:11:27.570 fused_ordering(656) 00:11:27.570 fused_ordering(657) 00:11:27.570 fused_ordering(658) 00:11:27.570 fused_ordering(659) 00:11:27.570 fused_ordering(660) 00:11:27.570 fused_ordering(661) 00:11:27.570 fused_ordering(662) 00:11:27.570 fused_ordering(663) 00:11:27.570 fused_ordering(664) 00:11:27.570 fused_ordering(665) 00:11:27.570 fused_ordering(666) 00:11:27.570 fused_ordering(667) 00:11:27.570 fused_ordering(668) 00:11:27.570 fused_ordering(669) 00:11:27.570 fused_ordering(670) 00:11:27.570 fused_ordering(671) 00:11:27.570 fused_ordering(672) 00:11:27.570 fused_ordering(673) 00:11:27.570 fused_ordering(674) 00:11:27.570 fused_ordering(675) 00:11:27.570 fused_ordering(676) 00:11:27.570 fused_ordering(677) 00:11:27.570 fused_ordering(678) 00:11:27.570 fused_ordering(679) 00:11:27.570 fused_ordering(680) 00:11:27.570 fused_ordering(681) 00:11:27.570 fused_ordering(682) 00:11:27.570 fused_ordering(683) 00:11:27.570 fused_ordering(684) 00:11:27.570 fused_ordering(685) 00:11:27.570 fused_ordering(686) 00:11:27.570 fused_ordering(687) 00:11:27.570 fused_ordering(688) 00:11:27.570 fused_ordering(689) 00:11:27.570 fused_ordering(690) 00:11:27.570 fused_ordering(691) 00:11:27.570 fused_ordering(692) 00:11:27.570 fused_ordering(693) 00:11:27.570 fused_ordering(694) 00:11:27.570 fused_ordering(695) 00:11:27.570 fused_ordering(696) 00:11:27.570 fused_ordering(697) 00:11:27.570 fused_ordering(698) 00:11:27.570 fused_ordering(699) 00:11:27.570 fused_ordering(700) 00:11:27.570 fused_ordering(701) 00:11:27.570 fused_ordering(702) 00:11:27.570 fused_ordering(703) 00:11:27.570 fused_ordering(704) 00:11:27.570 fused_ordering(705) 00:11:27.570 fused_ordering(706) 00:11:27.570 fused_ordering(707) 00:11:27.570 fused_ordering(708) 00:11:27.570 fused_ordering(709) 00:11:27.570 fused_ordering(710) 00:11:27.570 fused_ordering(711) 00:11:27.570 fused_ordering(712) 00:11:27.570 fused_ordering(713) 00:11:27.570 fused_ordering(714) 00:11:27.570 fused_ordering(715) 00:11:27.570 fused_ordering(716) 00:11:27.570 fused_ordering(717) 00:11:27.570 fused_ordering(718) 00:11:27.570 fused_ordering(719) 00:11:27.570 fused_ordering(720) 00:11:27.570 fused_ordering(721) 00:11:27.570 fused_ordering(722) 00:11:27.570 fused_ordering(723) 00:11:27.570 fused_ordering(724) 
00:11:27.570 fused_ordering(725) 00:11:27.570 fused_ordering(726) 00:11:27.570 fused_ordering(727) 00:11:27.570 fused_ordering(728) 00:11:27.570 fused_ordering(729) 00:11:27.570 fused_ordering(730) 00:11:27.570 fused_ordering(731) 00:11:27.570 fused_ordering(732) 00:11:27.570 fused_ordering(733) 00:11:27.570 fused_ordering(734) 00:11:27.570 fused_ordering(735) 00:11:27.570 fused_ordering(736) 00:11:27.570 fused_ordering(737) 00:11:27.570 fused_ordering(738) 00:11:27.570 fused_ordering(739) 00:11:27.570 fused_ordering(740) 00:11:27.570 fused_ordering(741) 00:11:27.570 fused_ordering(742) 00:11:27.570 fused_ordering(743) 00:11:27.570 fused_ordering(744) 00:11:27.570 fused_ordering(745) 00:11:27.570 fused_ordering(746) 00:11:27.570 fused_ordering(747) 00:11:27.570 fused_ordering(748) 00:11:27.570 fused_ordering(749) 00:11:27.570 fused_ordering(750) 00:11:27.570 fused_ordering(751) 00:11:27.570 fused_ordering(752) 00:11:27.570 fused_ordering(753) 00:11:27.570 fused_ordering(754) 00:11:27.570 fused_ordering(755) 00:11:27.570 fused_ordering(756) 00:11:27.570 fused_ordering(757) 00:11:27.570 fused_ordering(758) 00:11:27.570 fused_ordering(759) 00:11:27.570 fused_ordering(760) 00:11:27.570 fused_ordering(761) 00:11:27.570 fused_ordering(762) 00:11:27.570 fused_ordering(763) 00:11:27.570 fused_ordering(764) 00:11:27.570 fused_ordering(765) 00:11:27.570 fused_ordering(766) 00:11:27.570 fused_ordering(767) 00:11:27.570 fused_ordering(768) 00:11:27.570 fused_ordering(769) 00:11:27.570 fused_ordering(770) 00:11:27.570 fused_ordering(771) 00:11:27.570 fused_ordering(772) 00:11:27.570 fused_ordering(773) 00:11:27.570 fused_ordering(774) 00:11:27.570 fused_ordering(775) 00:11:27.570 fused_ordering(776) 00:11:27.570 fused_ordering(777) 00:11:27.570 fused_ordering(778) 00:11:27.570 fused_ordering(779) 00:11:27.570 fused_ordering(780) 00:11:27.570 fused_ordering(781) 00:11:27.570 fused_ordering(782) 00:11:27.570 fused_ordering(783) 00:11:27.570 fused_ordering(784) 00:11:27.570 fused_ordering(785) 00:11:27.570 fused_ordering(786) 00:11:27.570 fused_ordering(787) 00:11:27.570 fused_ordering(788) 00:11:27.570 fused_ordering(789) 00:11:27.570 fused_ordering(790) 00:11:27.570 fused_ordering(791) 00:11:27.570 fused_ordering(792) 00:11:27.570 fused_ordering(793) 00:11:27.570 fused_ordering(794) 00:11:27.570 fused_ordering(795) 00:11:27.570 fused_ordering(796) 00:11:27.570 fused_ordering(797) 00:11:27.570 fused_ordering(798) 00:11:27.570 fused_ordering(799) 00:11:27.570 fused_ordering(800) 00:11:27.570 fused_ordering(801) 00:11:27.570 fused_ordering(802) 00:11:27.570 fused_ordering(803) 00:11:27.570 fused_ordering(804) 00:11:27.570 fused_ordering(805) 00:11:27.570 fused_ordering(806) 00:11:27.570 fused_ordering(807) 00:11:27.570 fused_ordering(808) 00:11:27.570 fused_ordering(809) 00:11:27.570 fused_ordering(810) 00:11:27.570 fused_ordering(811) 00:11:27.570 fused_ordering(812) 00:11:27.570 fused_ordering(813) 00:11:27.570 fused_ordering(814) 00:11:27.570 fused_ordering(815) 00:11:27.570 fused_ordering(816) 00:11:27.570 fused_ordering(817) 00:11:27.570 fused_ordering(818) 00:11:27.570 fused_ordering(819) 00:11:27.570 fused_ordering(820) 00:11:28.179 fused_ordering(821) 00:11:28.179 fused_ordering(822) 00:11:28.179 fused_ordering(823) 00:11:28.179 fused_ordering(824) 00:11:28.179 fused_ordering(825) 00:11:28.179 fused_ordering(826) 00:11:28.179 fused_ordering(827) 00:11:28.179 fused_ordering(828) 00:11:28.179 fused_ordering(829) 00:11:28.179 fused_ordering(830) 00:11:28.179 fused_ordering(831) 00:11:28.179 
fused_ordering(832) 00:11:28.179 fused_ordering(833) 00:11:28.179 fused_ordering(834) 00:11:28.179 fused_ordering(835) 00:11:28.179 fused_ordering(836) 00:11:28.179 fused_ordering(837) 00:11:28.179 fused_ordering(838) 00:11:28.179 fused_ordering(839) 00:11:28.179 fused_ordering(840) 00:11:28.179 fused_ordering(841) 00:11:28.179 fused_ordering(842) 00:11:28.179 fused_ordering(843) 00:11:28.179 fused_ordering(844) 00:11:28.179 fused_ordering(845) 00:11:28.179 fused_ordering(846) 00:11:28.179 fused_ordering(847) 00:11:28.179 fused_ordering(848) 00:11:28.179 fused_ordering(849) 00:11:28.179 fused_ordering(850) 00:11:28.179 fused_ordering(851) 00:11:28.179 fused_ordering(852) 00:11:28.179 fused_ordering(853) 00:11:28.179 fused_ordering(854) 00:11:28.179 fused_ordering(855) 00:11:28.179 fused_ordering(856) 00:11:28.179 fused_ordering(857) 00:11:28.179 fused_ordering(858) 00:11:28.179 fused_ordering(859) 00:11:28.179 fused_ordering(860) 00:11:28.179 fused_ordering(861) 00:11:28.179 fused_ordering(862) 00:11:28.179 fused_ordering(863) 00:11:28.179 fused_ordering(864) 00:11:28.179 fused_ordering(865) 00:11:28.179 fused_ordering(866) 00:11:28.179 fused_ordering(867) 00:11:28.179 fused_ordering(868) 00:11:28.179 fused_ordering(869) 00:11:28.179 fused_ordering(870) 00:11:28.179 fused_ordering(871) 00:11:28.179 fused_ordering(872) 00:11:28.179 fused_ordering(873) 00:11:28.179 fused_ordering(874) 00:11:28.179 fused_ordering(875) 00:11:28.179 fused_ordering(876) 00:11:28.179 fused_ordering(877) 00:11:28.179 fused_ordering(878) 00:11:28.179 fused_ordering(879) 00:11:28.179 fused_ordering(880) 00:11:28.179 fused_ordering(881) 00:11:28.179 fused_ordering(882) 00:11:28.179 fused_ordering(883) 00:11:28.179 fused_ordering(884) 00:11:28.179 fused_ordering(885) 00:11:28.179 fused_ordering(886) 00:11:28.179 fused_ordering(887) 00:11:28.179 fused_ordering(888) 00:11:28.179 fused_ordering(889) 00:11:28.179 fused_ordering(890) 00:11:28.179 fused_ordering(891) 00:11:28.179 fused_ordering(892) 00:11:28.179 fused_ordering(893) 00:11:28.179 fused_ordering(894) 00:11:28.179 fused_ordering(895) 00:11:28.179 fused_ordering(896) 00:11:28.179 fused_ordering(897) 00:11:28.179 fused_ordering(898) 00:11:28.179 fused_ordering(899) 00:11:28.179 fused_ordering(900) 00:11:28.179 fused_ordering(901) 00:11:28.179 fused_ordering(902) 00:11:28.179 fused_ordering(903) 00:11:28.179 fused_ordering(904) 00:11:28.179 fused_ordering(905) 00:11:28.179 fused_ordering(906) 00:11:28.179 fused_ordering(907) 00:11:28.179 fused_ordering(908) 00:11:28.179 fused_ordering(909) 00:11:28.179 fused_ordering(910) 00:11:28.179 fused_ordering(911) 00:11:28.179 fused_ordering(912) 00:11:28.179 fused_ordering(913) 00:11:28.179 fused_ordering(914) 00:11:28.179 fused_ordering(915) 00:11:28.179 fused_ordering(916) 00:11:28.179 fused_ordering(917) 00:11:28.179 fused_ordering(918) 00:11:28.179 fused_ordering(919) 00:11:28.179 fused_ordering(920) 00:11:28.179 fused_ordering(921) 00:11:28.179 fused_ordering(922) 00:11:28.179 fused_ordering(923) 00:11:28.179 fused_ordering(924) 00:11:28.179 fused_ordering(925) 00:11:28.179 fused_ordering(926) 00:11:28.179 fused_ordering(927) 00:11:28.179 fused_ordering(928) 00:11:28.179 fused_ordering(929) 00:11:28.179 fused_ordering(930) 00:11:28.179 fused_ordering(931) 00:11:28.179 fused_ordering(932) 00:11:28.179 fused_ordering(933) 00:11:28.179 fused_ordering(934) 00:11:28.179 fused_ordering(935) 00:11:28.179 fused_ordering(936) 00:11:28.179 fused_ordering(937) 00:11:28.179 fused_ordering(938) 00:11:28.179 fused_ordering(939) 
00:11:28.179 fused_ordering(940) 00:11:28.179 fused_ordering(941) 00:11:28.179 fused_ordering(942) 00:11:28.179 fused_ordering(943) 00:11:28.179 fused_ordering(944) 00:11:28.179 fused_ordering(945) 00:11:28.179 fused_ordering(946) 00:11:28.179 fused_ordering(947) 00:11:28.179 fused_ordering(948) 00:11:28.179 fused_ordering(949) 00:11:28.179 fused_ordering(950) 00:11:28.179 fused_ordering(951) 00:11:28.179 fused_ordering(952) 00:11:28.179 fused_ordering(953) 00:11:28.179 fused_ordering(954) 00:11:28.179 fused_ordering(955) 00:11:28.179 fused_ordering(956) 00:11:28.179 fused_ordering(957) 00:11:28.179 fused_ordering(958) 00:11:28.179 fused_ordering(959) 00:11:28.179 fused_ordering(960) 00:11:28.179 fused_ordering(961) 00:11:28.179 fused_ordering(962) 00:11:28.179 fused_ordering(963) 00:11:28.179 fused_ordering(964) 00:11:28.179 fused_ordering(965) 00:11:28.179 fused_ordering(966) 00:11:28.179 fused_ordering(967) 00:11:28.179 fused_ordering(968) 00:11:28.179 fused_ordering(969) 00:11:28.179 fused_ordering(970) 00:11:28.179 fused_ordering(971) 00:11:28.179 fused_ordering(972) 00:11:28.179 fused_ordering(973) 00:11:28.179 fused_ordering(974) 00:11:28.179 fused_ordering(975) 00:11:28.179 fused_ordering(976) 00:11:28.179 fused_ordering(977) 00:11:28.179 fused_ordering(978) 00:11:28.179 fused_ordering(979) 00:11:28.179 fused_ordering(980) 00:11:28.179 fused_ordering(981) 00:11:28.179 fused_ordering(982) 00:11:28.179 fused_ordering(983) 00:11:28.179 fused_ordering(984) 00:11:28.179 fused_ordering(985) 00:11:28.179 fused_ordering(986) 00:11:28.179 fused_ordering(987) 00:11:28.179 fused_ordering(988) 00:11:28.179 fused_ordering(989) 00:11:28.179 fused_ordering(990) 00:11:28.179 fused_ordering(991) 00:11:28.179 fused_ordering(992) 00:11:28.179 fused_ordering(993) 00:11:28.179 fused_ordering(994) 00:11:28.180 fused_ordering(995) 00:11:28.180 fused_ordering(996) 00:11:28.180 fused_ordering(997) 00:11:28.180 fused_ordering(998) 00:11:28.180 fused_ordering(999) 00:11:28.180 fused_ordering(1000) 00:11:28.180 fused_ordering(1001) 00:11:28.180 fused_ordering(1002) 00:11:28.180 fused_ordering(1003) 00:11:28.180 fused_ordering(1004) 00:11:28.180 fused_ordering(1005) 00:11:28.180 fused_ordering(1006) 00:11:28.180 fused_ordering(1007) 00:11:28.180 fused_ordering(1008) 00:11:28.180 fused_ordering(1009) 00:11:28.180 fused_ordering(1010) 00:11:28.180 fused_ordering(1011) 00:11:28.180 fused_ordering(1012) 00:11:28.180 fused_ordering(1013) 00:11:28.180 fused_ordering(1014) 00:11:28.180 fused_ordering(1015) 00:11:28.180 fused_ordering(1016) 00:11:28.180 fused_ordering(1017) 00:11:28.180 fused_ordering(1018) 00:11:28.180 fused_ordering(1019) 00:11:28.180 fused_ordering(1020) 00:11:28.180 fused_ordering(1021) 00:11:28.180 fused_ordering(1022) 00:11:28.180 fused_ordering(1023) 00:11:28.180 22:08:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:28.180 22:08:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:28.180 22:08:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:28.180 22:08:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:11:28.180 22:08:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:28.180 22:08:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:11:28.180 22:08:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:28.180 22:08:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:11:28.439 rmmod nvme_tcp 00:11:28.439 rmmod nvme_fabrics 00:11:28.439 rmmod nvme_keyring 00:11:28.439 22:08:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:28.439 22:08:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:11:28.439 22:08:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:11:28.439 22:08:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2658367 ']' 00:11:28.439 22:08:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2658367 00:11:28.439 22:08:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 2658367 ']' 00:11:28.439 22:08:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 2658367 00:11:28.439 22:08:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:11:28.439 22:08:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:28.439 22:08:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2658367 00:11:28.439 22:08:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:28.439 22:08:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:28.439 22:08:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2658367' 00:11:28.439 killing process with pid 2658367 00:11:28.439 22:08:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 2658367 00:11:28.439 22:08:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 2658367 00:11:28.439 22:08:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:28.439 22:08:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:28.439 22:08:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:28.439 22:08:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:28.439 22:08:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:28.439 22:08:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.439 22:08:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:28.439 22:08:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.985 22:08:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:30.985 00:11:30.985 real 0m13.378s 00:11:30.985 user 0m7.196s 00:11:30.985 sys 0m7.333s 00:11:30.985 22:08:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:30.985 22:08:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:30.985 ************************************ 00:11:30.985 END TEST nvmf_fused_ordering 00:11:30.985 ************************************ 00:11:30.985 22:08:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:30.985 22:08:55 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:30.985 22:08:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:30.985 22:08:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
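The fused_ordering run above reduces to a short RPC sequence against the already-running nvmf_tgt (started earlier in the log) followed by one client invocation. The sketch below strings together the commands recorded in the xtrace output; the only assumptions beyond the log are that scripts/rpc.py is used in place of the harness's rpc_cmd wrapper and that the target listens on the default /var/tmp/spdk.sock RPC socket.

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Transport, subsystem, listener and namespace, with the same flags as traced above.
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_null_create NULL1 1000 512    # 1000 MB null bdev, 512-byte blocks (the "size: 1GB" namespace above)
  ./scripts/rpc.py bdev_wait_for_examine
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  # Client side: connect over NVMe/TCP and run the fused-ordering exerciser;
  # the fused_ordering(0)..fused_ordering(1023) lines above are its progress output.
  ./test/nvme/fused_ordering/fused_ordering \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'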
00:11:30.985 22:08:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:30.985 ************************************ 00:11:30.985 START TEST nvmf_delete_subsystem 00:11:30.985 ************************************ 00:11:30.985 22:08:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:30.985 * Looking for test storage... 00:11:30.985 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:30.985 22:08:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:30.985 22:08:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:11:30.985 22:08:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:30.985 22:08:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:30.986 22:08:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:30.986 22:08:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:30.986 22:08:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:30.986 22:08:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:30.986 22:08:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:30.986 22:08:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:30.986 22:08:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:30.986 22:08:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:30.986 22:08:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:30.986 22:08:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:30.986 22:08:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:30.986 22:08:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:30.986 22:08:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:30.986 22:08:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:30.986 22:08:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:30.986 22:08:56 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:30.986 22:08:56 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:30.986 22:08:56 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:30.986 22:08:56 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.986 22:08:56 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.986 22:08:56 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.986 22:08:56 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:11:30.986 22:08:56 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.986 22:08:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:11:30.986 22:08:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:30.986 22:08:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:30.986 22:08:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:30.986 22:08:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:30.986 22:08:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:30.986 22:08:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:30.986 22:08:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:30.986 22:08:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:30.986 22:08:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:30.986 22:08:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:30.986 22:08:56 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:30.986 22:08:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:30.986 22:08:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:30.986 22:08:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:30.986 22:08:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.986 22:08:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:30.986 22:08:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.986 22:08:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:30.986 22:08:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:30.986 22:08:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:30.986 22:08:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:37.576 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:37.576 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:37.576 
22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:37.576 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:37.576 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:37.577 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:37.577 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:37.577 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:37.577 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:37.577 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:37.577 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:37.577 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:37.577 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:37.577 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:37.577 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:37.577 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:37.577 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:37.577 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:37.577 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:37.577 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:37.577 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:37.577 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:37.577 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:37.577 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:37.577 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:37.577 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:37.577 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:37.577 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:37.577 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:37.577 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:37.577 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:37.839 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:37.839 22:09:02 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:37.839 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:37.839 22:09:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:37.839 22:09:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:37.839 22:09:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:37.839 22:09:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:37.839 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:37.839 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.534 ms 00:11:37.839 00:11:37.839 --- 10.0.0.2 ping statistics --- 00:11:37.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.839 rtt min/avg/max/mdev = 0.534/0.534/0.534/0.000 ms 00:11:37.839 22:09:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:37.839 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:37.839 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.391 ms 00:11:37.839 00:11:37.839 --- 10.0.0.1 ping statistics --- 00:11:37.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.839 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:11:37.839 22:09:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:37.839 22:09:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:11:37.839 22:09:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:37.839 22:09:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:37.839 22:09:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:37.839 22:09:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:37.839 22:09:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:37.839 22:09:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:37.839 22:09:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:37.839 22:09:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:37.839 22:09:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:37.839 22:09:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:37.839 22:09:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:37.839 22:09:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2663281 00:11:37.839 22:09:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2663281 00:11:37.839 22:09:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:37.839 22:09:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 2663281 ']' 00:11:37.839 22:09:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.839 22:09:03 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:37.839 22:09:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:37.839 22:09:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:37.839 22:09:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:38.113 [2024-07-15 22:09:03.185487] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:11:38.113 [2024-07-15 22:09:03.185553] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:38.113 EAL: No free 2048 kB hugepages reported on node 1 00:11:38.113 [2024-07-15 22:09:03.258803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:38.113 [2024-07-15 22:09:03.334296] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:38.113 [2024-07-15 22:09:03.334337] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:38.113 [2024-07-15 22:09:03.334345] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:38.113 [2024-07-15 22:09:03.334351] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:38.113 [2024-07-15 22:09:03.334357] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
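Before the delete_subsystem target comes up, the harness wires the two detected E810 ports into a point-to-point NVMe/TCP path: cvl_0_0 is moved into a private network namespace as the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1, an iptables rule accepts TCP port 4420 on cvl_0_1, and connectivity is verified with the two pings above. The condensed sketch below uses the interface names and paths this node reports, so they will differ elsewhere; backgrounding the target with '&' stands in for the harness's nvmfappstart/waitforlisten handling.

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic through
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # Target runs inside the namespace: shm id 0 (-i), tracepoint group mask 0xFFFF (-e), cores 0-1 (-m 0x3).
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &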
00:11:38.113 [2024-07-15 22:09:03.334520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:38.113 [2024-07-15 22:09:03.334521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.686 22:09:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:38.686 22:09:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:11:38.686 22:09:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:38.686 22:09:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:38.686 22:09:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:38.686 22:09:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:38.686 22:09:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:38.686 22:09:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.687 22:09:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:38.687 [2024-07-15 22:09:04.002483] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:38.687 22:09:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.687 22:09:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:38.687 22:09:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.687 22:09:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:38.947 22:09:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.947 22:09:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:38.947 22:09:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.947 22:09:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:38.947 [2024-07-15 22:09:04.026653] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:38.947 22:09:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.947 22:09:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:38.947 22:09:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.947 22:09:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:38.947 NULL1 00:11:38.947 22:09:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.947 22:09:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:38.948 22:09:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.948 22:09:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:38.948 Delay0 00:11:38.948 22:09:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.948 22:09:04 
nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:38.948 22:09:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.948 22:09:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:38.948 22:09:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.948 22:09:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2663515 00:11:38.948 22:09:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:38.948 22:09:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:38.948 EAL: No free 2048 kB hugepages reported on node 1 00:11:38.948 [2024-07-15 22:09:04.123243] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:11:40.862 22:09:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:40.862 22:09:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.862 22:09:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:41.122 Read completed with error (sct=0, sc=8) 00:11:41.122 Read completed with error (sct=0, sc=8) 00:11:41.122 starting I/O failed: -6 00:11:41.122 Read completed with error (sct=0, sc=8) 00:11:41.122 Read completed with error (sct=0, sc=8) 00:11:41.122 Write completed with error (sct=0, sc=8) 00:11:41.122 Read completed with error (sct=0, sc=8) 00:11:41.122 starting I/O failed: -6 00:11:41.122 Read completed with error (sct=0, sc=8) 00:11:41.122 Read completed with error (sct=0, sc=8) 00:11:41.122 Write completed with error (sct=0, sc=8) 00:11:41.122 Read completed with error (sct=0, sc=8) 00:11:41.122 starting I/O failed: -6 00:11:41.122 Write completed with error (sct=0, sc=8) 00:11:41.122 Read completed with error (sct=0, sc=8) 00:11:41.122 Read completed with error (sct=0, sc=8) 00:11:41.122 Read completed with error (sct=0, sc=8) 00:11:41.122 starting I/O failed: -6 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Write completed with error (sct=0, sc=8) 00:11:41.123 Write completed with error (sct=0, sc=8) 00:11:41.123 starting I/O failed: -6 00:11:41.123 Write completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 starting I/O failed: -6 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Write completed with error (sct=0, sc=8) 00:11:41.123 starting I/O failed: -6 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 starting I/O failed: -6 00:11:41.123 Read 
completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 starting I/O failed: -6 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 starting I/O failed: -6 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Write completed with error (sct=0, sc=8) 00:11:41.123 starting I/O failed: -6 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 [2024-07-15 22:09:06.289460] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e5c0 is same with the state(5) to be set 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Write completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Write completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Write completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Write completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Write completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Write completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Write completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Write completed with error (sct=0, sc=8) 00:11:41.123 Write completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Write completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Write completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, 
sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Write completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Write completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Write completed with error (sct=0, sc=8) 00:11:41.123 starting I/O failed: -6 00:11:41.123 Write completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 starting I/O failed: -6 00:11:41.123 Write completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Write completed with error (sct=0, sc=8) 00:11:41.123 starting I/O failed: -6 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 starting I/O failed: -6 00:11:41.123 Write completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 starting I/O failed: -6 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Write completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Write completed with error (sct=0, sc=8) 00:11:41.123 starting I/O failed: -6 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 starting I/O failed: -6 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Write completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 starting I/O failed: -6 00:11:41.123 Write completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 starting I/O failed: -6 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 starting I/O failed: -6 00:11:41.123 [2024-07-15 22:09:06.293109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f1210000c00 is same with the state(5) to be set 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Write completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Write completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Write completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Write completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read 
completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Write completed with error (sct=0, sc=8) 00:11:41.123 Write completed with error (sct=0, sc=8) 00:11:41.123 Write completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Write completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Write completed with error (sct=0, sc=8) 00:11:41.123 Write completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Write completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Write completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Write completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Write completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Read completed with error (sct=0, sc=8) 00:11:41.123 Write completed with error (sct=0, sc=8) 00:11:41.123 Write completed with error (sct=0, sc=8) 00:11:42.130 [2024-07-15 22:09:07.262260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98fac0 is same with the state(5) to be set 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Write completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Write completed with error (sct=0, sc=8) 00:11:42.130 [2024-07-15 22:09:07.293049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e7a0 is same with the state(5) to be set 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error 
(sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Write completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Write completed with error (sct=0, sc=8) 00:11:42.130 Write completed with error (sct=0, sc=8) 00:11:42.130 Write completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 [2024-07-15 22:09:07.293397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e3e0 is same with the state(5) to be set 00:11:42.130 Write completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Write completed with error (sct=0, sc=8) 00:11:42.130 Write completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Write completed with error (sct=0, sc=8) 00:11:42.130 Write completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Write completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 [2024-07-15 22:09:07.295769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f121000d740 is same with the state(5) to be set 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Write completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Write completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Write completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 Read completed with error (sct=0, sc=8) 00:11:42.130 [2024-07-15 22:09:07.295870] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f121000cfe0 is same with the state(5) to be set 00:11:42.130 Initializing NVMe Controllers 00:11:42.130 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:42.130 Controller IO queue size 128, less than required. 
00:11:42.130 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:42.130 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:42.130 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:42.130 Initialization complete. Launching workers. 00:11:42.130 ======================================================== 00:11:42.130 Latency(us) 00:11:42.130 Device Information : IOPS MiB/s Average min max 00:11:42.130 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 165.75 0.08 904582.03 219.07 1007910.71 00:11:42.130 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 154.80 0.08 930967.12 286.74 1011060.50 00:11:42.130 ======================================================== 00:11:42.130 Total : 320.55 0.16 917323.90 219.07 1011060.50 00:11:42.130 00:11:42.130 [2024-07-15 22:09:07.296571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x98fac0 (9): Bad file descriptor 00:11:42.130 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:11:42.130 22:09:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.130 22:09:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:11:42.130 22:09:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2663515 00:11:42.130 22:09:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:42.701 22:09:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:42.701 22:09:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2663515 00:11:42.701 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2663515) - No such process 00:11:42.701 22:09:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2663515 00:11:42.701 22:09:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:11:42.701 22:09:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 2663515 00:11:42.701 22:09:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:11:42.701 22:09:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:42.701 22:09:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:11:42.701 22:09:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:42.701 22:09:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 2663515 00:11:42.701 22:09:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:11:42.701 22:09:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:42.701 22:09:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:42.701 22:09:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:42.701 22:09:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:42.701 22:09:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 
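Summarized, the first phase above starts spdk_nvme_perf against cnode1, deletes the subsystem while that I/O is still in flight (hence the aborted reads and writes and the perf errors), then polls until the perf process disappears, roughly as follows (a sketch; rpc.py stands in for the test's rpc_cmd wrapper, and NOT is the suite's expect-failure helper seen in the trace):

    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # pull the subsystem out from under the I/O
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do                 # wait for perf to notice and exit
        (( delay++ > 30 )) && exit 1                          # give up after ~15 s
        sleep 0.5
    done
    NOT wait "$perf_pid"                                      # perf is expected to exit with errors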
00:11:42.701 22:09:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:42.701 22:09:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.701 22:09:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:42.701 22:09:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.701 22:09:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:42.701 [2024-07-15 22:09:07.828329] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:42.701 22:09:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.701 22:09:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:42.701 22:09:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.701 22:09:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:42.701 22:09:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.701 22:09:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2664196 00:11:42.701 22:09:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:11:42.701 22:09:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:42.701 22:09:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2664196 00:11:42.701 22:09:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:42.701 EAL: No free 2048 kB hugepages reported on node 1 00:11:42.701 [2024-07-15 22:09:07.904317] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
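The second phase differs only in what backs the namespace: Delay0 is a delay bdev layered on the null bdev NULL1 (created earlier with bdev_delay_create and latency arguments of 1000000), so each I/O completion is held back for on the order of a second. That is why the kill -0 poll below repeats several times before perf finishes, and why the averages in the summary that follows sit near 1,000,000 us. A sketch of the RPCs as traced, with rpc.py again standing in for rpc_cmd:

    rpc.py bdev_null_create NULL1 1000 512                    # null backing bdev (arguments as traced above)
    rpc.py bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000           # delay bdev stacked on top of NULL1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0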
00:11:43.271 22:09:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:43.271 22:09:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2664196 00:11:43.271 22:09:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:43.532 22:09:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:43.532 22:09:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2664196 00:11:43.793 22:09:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:44.053 22:09:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:44.053 22:09:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2664196 00:11:44.053 22:09:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:44.624 22:09:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:44.624 22:09:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2664196 00:11:44.624 22:09:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:45.194 22:09:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:45.194 22:09:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2664196 00:11:45.195 22:09:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:45.766 22:09:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:45.766 22:09:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2664196 00:11:45.766 22:09:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:46.027 Initializing NVMe Controllers 00:11:46.027 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:46.027 Controller IO queue size 128, less than required. 00:11:46.027 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:46.027 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:46.027 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:46.027 Initialization complete. Launching workers. 
00:11:46.027 ======================================================== 00:11:46.027 Latency(us) 00:11:46.027 Device Information : IOPS MiB/s Average min max 00:11:46.027 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002264.91 1000128.90 1007680.65 00:11:46.027 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003026.49 1000380.17 1009890.14 00:11:46.027 ======================================================== 00:11:46.027 Total : 256.00 0.12 1002645.70 1000128.90 1009890.14 00:11:46.027 00:11:46.288 22:09:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:46.288 22:09:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2664196 00:11:46.288 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2664196) - No such process 00:11:46.288 22:09:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2664196 00:11:46.288 22:09:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:46.288 22:09:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:11:46.288 22:09:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:46.288 22:09:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:11:46.288 22:09:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:46.288 22:09:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:11:46.288 22:09:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:46.288 22:09:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:46.288 rmmod nvme_tcp 00:11:46.288 rmmod nvme_fabrics 00:11:46.288 rmmod nvme_keyring 00:11:46.288 22:09:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:46.288 22:09:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:11:46.288 22:09:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:11:46.288 22:09:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2663281 ']' 00:11:46.288 22:09:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2663281 00:11:46.288 22:09:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 2663281 ']' 00:11:46.288 22:09:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 2663281 00:11:46.288 22:09:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:11:46.288 22:09:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:46.288 22:09:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2663281 00:11:46.288 22:09:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:46.288 22:09:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:46.288 22:09:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2663281' 00:11:46.288 killing process with pid 2663281 00:11:46.288 22:09:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 2663281 00:11:46.288 22:09:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 
2663281 00:11:46.549 22:09:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:46.549 22:09:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:46.549 22:09:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:46.549 22:09:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:46.549 22:09:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:46.549 22:09:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.549 22:09:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:46.549 22:09:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.459 22:09:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:48.459 00:11:48.459 real 0m17.828s 00:11:48.459 user 0m30.866s 00:11:48.459 sys 0m6.147s 00:11:48.459 22:09:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:48.459 22:09:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:48.459 ************************************ 00:11:48.459 END TEST nvmf_delete_subsystem 00:11:48.459 ************************************ 00:11:48.459 22:09:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:48.460 22:09:13 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:48.460 22:09:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:48.460 22:09:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:48.460 22:09:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:48.720 ************************************ 00:11:48.720 START TEST nvmf_ns_masking 00:11:48.720 ************************************ 00:11:48.720 22:09:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:48.720 * Looking for test storage... 
00:11:48.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:48.720 22:09:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:48.720 22:09:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:48.720 22:09:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:48.720 22:09:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:48.720 22:09:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:48.720 22:09:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:48.720 22:09:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:48.720 22:09:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:48.720 22:09:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:48.720 22:09:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:48.720 22:09:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:48.720 22:09:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:48.720 22:09:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=f8088e19-9264-46d1-aa9a-b19325a65287 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=98aadda4-26e9-4f55-8e31-f67f982227a4 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=6e5ee7f4-5db1-47b1-b0da-5bd0e2516558 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:11:48.721 22:09:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:56.875 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:56.875 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:11:56.875 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:56.875 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:56.875 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:56.875 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:56.875 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:56.875 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:11:56.875 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:56.875 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:11:56.875 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:11:56.875 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:11:56.875 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:11:56.875 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:11:56.875 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:11:56.875 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:56.875 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:56.875 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:56.875 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:56.875 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:56.875 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:56.875 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:56.875 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:56.875 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:56.875 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:56.875 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:56.875 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:56.876 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:56.876 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:56.876 
22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:56.876 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:56.876 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:56.876 22:09:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:56.876 22:09:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:56.876 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:56.876 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.506 ms 00:11:56.876 00:11:56.876 --- 10.0.0.2 ping statistics --- 00:11:56.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.876 rtt min/avg/max/mdev = 0.506/0.506/0.506/0.000 ms 00:11:56.876 22:09:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:56.876 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:56.876 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.447 ms 00:11:56.876 00:11:56.876 --- 10.0.0.1 ping statistics --- 00:11:56.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.876 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:11:56.876 22:09:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:56.876 22:09:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:11:56.876 22:09:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:56.876 22:09:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:56.876 22:09:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:56.876 22:09:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:56.876 22:09:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:56.876 22:09:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:56.876 22:09:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:56.876 22:09:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:11:56.876 22:09:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:56.876 22:09:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:56.876 22:09:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:56.876 22:09:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2669668 00:11:56.876 22:09:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2669668 00:11:56.876 22:09:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:56.876 22:09:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2669668 ']' 00:11:56.876 22:09:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.876 22:09:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:56.876 22:09:21 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.876 22:09:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:56.876 22:09:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:56.876 [2024-07-15 22:09:21.127063] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:11:56.876 [2024-07-15 22:09:21.127112] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:56.876 EAL: No free 2048 kB hugepages reported on node 1 00:11:56.876 [2024-07-15 22:09:21.191525] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.876 [2024-07-15 22:09:21.255321] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:56.876 [2024-07-15 22:09:21.255355] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:56.876 [2024-07-15 22:09:21.255362] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:56.876 [2024-07-15 22:09:21.255369] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:56.876 [2024-07-15 22:09:21.255374] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:56.876 [2024-07-15 22:09:21.255394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.876 22:09:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:56.876 22:09:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:11:56.876 22:09:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:56.876 22:09:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:56.876 22:09:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:56.876 22:09:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:56.876 22:09:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:56.876 [2024-07-15 22:09:22.082435] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:56.876 22:09:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:11:56.876 22:09:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:11:56.876 22:09:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:57.137 Malloc1 00:11:57.137 22:09:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:57.137 Malloc2 00:11:57.137 22:09:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
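Condensed, the target-side bring-up for the masking test traced above is four RPCs (arguments taken from MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE in this run; rpc.py stands in for the full scripts/rpc.py path):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1               # two malloc bdevs sized per the
    rpc.py bdev_malloc_create 64 512 -b Malloc2               # MALLOC_* variables above
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME

Namespaces are then attached to cnode1 both with and without --no-auto-visible, so the host-visibility checks that follow have something to compare.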
00:11:57.397 22:09:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:57.657 22:09:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:57.657 [2024-07-15 22:09:22.865084] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:57.657 22:09:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:11:57.657 22:09:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6e5ee7f4-5db1-47b1-b0da-5bd0e2516558 -a 10.0.0.2 -s 4420 -i 4 00:11:57.917 22:09:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:11:57.917 22:09:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:57.917 22:09:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:57.917 22:09:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:57.917 22:09:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:59.828 22:09:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:59.828 22:09:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:59.828 22:09:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:59.828 22:09:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:59.828 22:09:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:59.828 22:09:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:59.828 22:09:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:59.828 22:09:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:59.828 22:09:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:59.828 22:09:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:59.828 22:09:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:11:59.828 22:09:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:59.828 22:09:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:59.828 [ 0]:0x1 00:11:59.828 22:09:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:59.828 22:09:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:59.828 22:09:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c1c2c975b08d4816952fe96491306b7c 00:11:59.828 22:09:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c1c2c975b08d4816952fe96491306b7c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:59.828 22:09:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
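Each visibility probe in the trace follows the same host-side pattern: connect with a host NQN, wait for the block device with the expected serial, then check whether a given NSID is listed and reports a non-zero NGUID. The helper below is a simplified sketch of that ns_is_visible check (the real helper lives in target/ns_masking.sh; this body is a condensation, not the captured script, and /dev/nvme0 is hard-coded where the trace resolves the controller via nvme list-subsys and jq).

ns_is_visible() {
    local nsid=$1
    nvme list-ns /dev/nvme0 | grep -q "$nsid" || return 1           # e.g. prints "[ 0]:0x1" when the namespace is attached
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]              # an all-zero NGUID is treated as "masked", as in the comparisons above
}
ns_is_visible 0x1 && echo "nsid 1 visible" || echo "nsid 1 masked"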
00:12:00.087 22:09:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:00.087 22:09:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:00.088 22:09:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:00.088 [ 0]:0x1 00:12:00.088 22:09:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:00.088 22:09:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:00.088 22:09:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c1c2c975b08d4816952fe96491306b7c 00:12:00.088 22:09:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c1c2c975b08d4816952fe96491306b7c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:00.088 22:09:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:00.088 22:09:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:00.088 22:09:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:00.088 [ 1]:0x2 00:12:00.088 22:09:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:00.088 22:09:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:00.088 22:09:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fd52b7442233480c9670b526f4a42cfe 00:12:00.088 22:09:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fd52b7442233480c9670b526f4a42cfe != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:00.088 22:09:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:00.088 22:09:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:00.348 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.348 22:09:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:00.608 22:09:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:00.608 22:09:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:00.608 22:09:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6e5ee7f4-5db1-47b1-b0da-5bd0e2516558 -a 10.0.0.2 -s 4420 -i 4 00:12:00.868 22:09:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:00.868 22:09:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:00.868 22:09:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:00.868 22:09:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:12:00.869 22:09:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:12:00.869 22:09:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:02.782 22:09:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:02.782 22:09:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:02.782 22:09:28 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:02.782 22:09:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:02.782 22:09:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:02.782 22:09:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:02.782 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:02.782 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:03.043 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:03.043 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:03.043 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:03.043 22:09:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:03.043 22:09:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:03.043 22:09:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:03.043 22:09:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:03.043 22:09:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:03.043 22:09:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:03.043 22:09:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:03.043 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:03.043 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:03.043 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:03.043 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:03.043 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:03.043 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:03.043 22:09:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:03.043 22:09:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:03.043 22:09:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:03.043 22:09:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:03.043 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:03.043 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:03.043 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:03.043 [ 0]:0x2 00:12:03.043 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:03.043 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:03.043 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fd52b7442233480c9670b526f4a42cfe 00:12:03.043 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
fd52b7442233480c9670b526f4a42cfe != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:03.043 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:03.304 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:03.304 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:03.304 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:03.304 [ 0]:0x1 00:12:03.304 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:03.304 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:03.304 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c1c2c975b08d4816952fe96491306b7c 00:12:03.304 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c1c2c975b08d4816952fe96491306b7c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:03.304 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:03.304 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:03.304 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:03.304 [ 1]:0x2 00:12:03.304 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:03.304 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:03.304 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fd52b7442233480c9670b526f4a42cfe 00:12:03.304 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fd52b7442233480c9670b526f4a42cfe != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:03.304 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:03.564 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:03.564 22:09:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:03.564 22:09:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:03.564 22:09:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:03.564 22:09:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:03.564 22:09:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:03.564 22:09:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:03.564 22:09:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:03.564 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:03.564 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:03.564 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:03.564 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:03.564 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:12:03.564 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:03.564 22:09:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:03.564 22:09:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:03.564 22:09:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:03.564 22:09:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:03.564 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:03.564 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:03.564 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:03.564 [ 0]:0x2 00:12:03.564 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:03.564 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:03.564 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fd52b7442233480c9670b526f4a42cfe 00:12:03.564 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fd52b7442233480c9670b526f4a42cfe != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:03.564 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:03.564 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:03.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.825 22:09:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:03.825 22:09:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:03.825 22:09:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6e5ee7f4-5db1-47b1-b0da-5bd0e2516558 -a 10.0.0.2 -s 4420 -i 4 00:12:04.086 22:09:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:04.086 22:09:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:04.086 22:09:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:04.086 22:09:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:04.086 22:09:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:04.086 22:09:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
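The masking itself is driven entirely from the target side with three RPCs that appear verbatim in the trace: add the namespace with --no-auto-visible so no host sees it by default, then grant or revoke visibility per host NQN. Condensed, again with $rpc standing in for the full scripts/rpc.py path:

$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible   # nsid 1 hidden from every host
$rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1           # now visible to host1 only
$rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1        # hidden from host1 again

The NOT-wrapped call later in the trace shows the same nvmf_ns_remove_host RPC being rejected with -32602 Invalid parameters when aimed at namespace 2, which was added without --no-auto-visible.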
00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:06.665 [ 0]:0x1 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c1c2c975b08d4816952fe96491306b7c 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c1c2c975b08d4816952fe96491306b7c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:06.665 [ 1]:0x2 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fd52b7442233480c9670b526f4a42cfe 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fd52b7442233480c9670b526f4a42cfe != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:06.665 [ 0]:0x2 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fd52b7442233480c9670b526f4a42cfe 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fd52b7442233480c9670b526f4a42cfe != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:06.665 22:09:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:06.666 22:09:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:06.666 22:09:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:06.666 22:09:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:06.666 22:09:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:06.666 22:09:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:06.666 22:09:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:06.666 22:09:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:06.666 22:09:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:06.666 [2024-07-15 22:09:31.959212] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:06.666 request: 00:12:06.666 { 00:12:06.666 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:06.666 "nsid": 2, 00:12:06.666 "host": "nqn.2016-06.io.spdk:host1", 00:12:06.666 "method": "nvmf_ns_remove_host", 00:12:06.666 "req_id": 1 00:12:06.666 } 00:12:06.666 Got JSON-RPC error response 00:12:06.666 response: 00:12:06.666 { 00:12:06.666 "code": -32602, 00:12:06.666 "message": "Invalid parameters" 00:12:06.666 } 00:12:06.927 22:09:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:06.927 22:09:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:06.927 22:09:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:06.927 22:09:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:06.927 22:09:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:06.927 22:09:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:06.927 22:09:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:06.927 22:09:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:06.927 22:09:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:06.927 22:09:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:06.927 22:09:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:06.927 22:09:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:06.927 22:09:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:06.927 22:09:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:06.927 22:09:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:06.927 22:09:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:06.927 22:09:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:06.927 22:09:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:06.927 22:09:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:06.927 22:09:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:06.927 22:09:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:06.927 22:09:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:06.927 22:09:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:06.927 22:09:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:06.927 22:09:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:06.927 [ 0]:0x2 00:12:06.927 22:09:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:06.927 22:09:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:06.927 22:09:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fd52b7442233480c9670b526f4a42cfe 00:12:06.927 22:09:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
fd52b7442233480c9670b526f4a42cfe != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:06.927 22:09:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:06.927 22:09:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:06.927 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.927 22:09:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2671871 00:12:06.927 22:09:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:06.927 22:09:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:06.927 22:09:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2671871 /var/tmp/host.sock 00:12:06.927 22:09:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2671871 ']' 00:12:06.927 22:09:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:12:06.927 22:09:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:06.927 22:09:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:06.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:06.927 22:09:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:06.927 22:09:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:06.927 [2024-07-15 22:09:32.222368] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
00:12:06.927 [2024-07-15 22:09:32.222425] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2671871 ] 00:12:06.927 EAL: No free 2048 kB hugepages reported on node 1 00:12:07.187 [2024-07-15 22:09:32.298487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.187 [2024-07-15 22:09:32.362678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:07.755 22:09:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:07.755 22:09:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:12:07.755 22:09:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:08.014 22:09:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:08.014 22:09:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid f8088e19-9264-46d1-aa9a-b19325a65287 00:12:08.014 22:09:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:08.014 22:09:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g F8088E19926446D1AA9AB19325A65287 -i 00:12:08.274 22:09:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 98aadda4-26e9-4f55-8e31-f67f982227a4 00:12:08.274 22:09:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:08.274 22:09:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 98AADDA426E94F558E31F67F982227A4 -i 00:12:08.533 22:09:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:08.533 22:09:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:08.794 22:09:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:08.794 22:09:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:09.054 nvme0n1 00:12:09.054 22:09:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:09.054 22:09:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:12:09.314 nvme1n2 00:12:09.314 22:09:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:09.314 22:09:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:09.314 22:09:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:09.314 22:09:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:09.314 22:09:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:09.575 22:09:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:09.575 22:09:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:09.575 22:09:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:09.575 22:09:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:09.836 22:09:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ f8088e19-9264-46d1-aa9a-b19325a65287 == \f\8\0\8\8\e\1\9\-\9\2\6\4\-\4\6\d\1\-\a\a\9\a\-\b\1\9\3\2\5\a\6\5\2\8\7 ]] 00:12:09.836 22:09:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:09.836 22:09:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:09.836 22:09:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:09.836 22:09:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 98aadda4-26e9-4f55-8e31-f67f982227a4 == \9\8\a\a\d\d\a\4\-\2\6\e\9\-\4\f\5\5\-\8\e\3\1\-\f\6\7\f\9\8\2\2\2\7\a\4 ]] 00:12:09.836 22:09:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2671871 00:12:09.836 22:09:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2671871 ']' 00:12:09.836 22:09:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2671871 00:12:09.836 22:09:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:09.836 22:09:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:09.836 22:09:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2671871 00:12:10.097 22:09:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:10.097 22:09:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:10.097 22:09:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2671871' 00:12:10.097 killing process with pid 2671871 00:12:10.097 22:09:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2671871 00:12:10.097 22:09:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2671871 00:12:10.097 22:09:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:10.358 22:09:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:12:10.358 22:09:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:12:10.358 22:09:35 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:10.358 22:09:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:12:10.358 22:09:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:10.358 22:09:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:12:10.358 22:09:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:10.358 22:09:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:10.358 rmmod nvme_tcp 00:12:10.358 rmmod nvme_fabrics 00:12:10.358 rmmod nvme_keyring 00:12:10.358 22:09:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:10.358 22:09:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:12:10.358 22:09:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:12:10.358 22:09:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2669668 ']' 00:12:10.358 22:09:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2669668 00:12:10.358 22:09:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2669668 ']' 00:12:10.358 22:09:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2669668 00:12:10.358 22:09:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:10.358 22:09:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:10.358 22:09:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2669668 00:12:10.358 22:09:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:10.358 22:09:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:10.358 22:09:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2669668' 00:12:10.358 killing process with pid 2669668 00:12:10.358 22:09:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2669668 00:12:10.358 22:09:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2669668 00:12:10.619 22:09:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:10.619 22:09:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:10.619 22:09:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:10.619 22:09:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:10.619 22:09:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:10.619 22:09:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.619 22:09:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:10.619 22:09:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.168 22:09:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:13.168 00:12:13.168 real 0m24.105s 00:12:13.168 user 0m24.314s 00:12:13.168 sys 0m7.139s 00:12:13.168 22:09:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:13.168 22:09:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:13.168 ************************************ 00:12:13.168 END TEST nvmf_ns_masking 00:12:13.168 ************************************ 00:12:13.168 22:09:37 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:12:13.168 22:09:37 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:12:13.168 22:09:37 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:13.168 22:09:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:13.168 22:09:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:13.168 22:09:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:13.168 ************************************ 00:12:13.168 START TEST nvmf_nvme_cli 00:12:13.168 ************************************ 00:12:13.168 22:09:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:13.168 * Looking for test storage... 00:12:13.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:12:13.168 22:09:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:19.756 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:19.756 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:19.756 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:19.757 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:19.757 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:19.757 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:19.757 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:19.757 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:19.757 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:19.757 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:12:19.757 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:19.757 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:19.757 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:19.757 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:19.757 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:19.757 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:19.757 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:19.757 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:19.757 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:19.757 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:19.757 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:19.757 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:12:19.757 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:19.757 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:19.757 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:19.757 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:19.757 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:19.757 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:19.757 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:19.757 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:19.757 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:19.757 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:19.757 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:19.757 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:19.757 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:19.757 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:19.757 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:19.757 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:19.757 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:19.757 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:19.757 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:19.757 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:19.757 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:19.757 22:09:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:19.757 22:09:45 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:19.757 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:19.757 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.589 ms 00:12:19.757 00:12:19.757 --- 10.0.0.2 ping statistics --- 00:12:19.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.757 rtt min/avg/max/mdev = 0.589/0.589/0.589/0.000 ms 00:12:19.757 22:09:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:19.757 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:19.757 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.363 ms 00:12:19.757 00:12:19.757 --- 10.0.0.1 ping statistics --- 00:12:19.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.757 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:12:19.757 22:09:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:19.757 22:09:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:12:19.757 22:09:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:19.757 22:09:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:19.757 22:09:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:19.757 22:09:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:19.757 22:09:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:19.757 22:09:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:19.757 22:09:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:19.757 22:09:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:19.757 22:09:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:19.757 22:09:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:19.757 22:09:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:19.757 22:09:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2676872 00:12:19.757 22:09:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2676872 00:12:19.757 22:09:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:19.757 22:09:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 2676872 ']' 00:12:19.757 22:09:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.757 22:09:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:19.757 22:09:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.757 22:09:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:19.757 22:09:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:20.018 [2024-07-15 22:09:45.115369] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
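The nvmf_tcp_init sequence traced above moves one of the two E810 ports into a private network namespace so that 10.0.0.1 and 10.0.0.2 sit on opposite sides of a real link. A condensed sketch of that topology, using placeholder names (port_a/port_b and tgt_ns stand in for the host-specific cvl_0_0, cvl_0_1 and cvl_0_0_ns_spdk):

  TGT_IF=port_a        # target-side port (cvl_0_0 in this trace)
  INI_IF=port_b        # initiator-side port (cvl_0_1 in this trace)
  NS=tgt_ns            # namespace holding the target port (cvl_0_0_ns_spdk here)

  ip -4 addr flush "$TGT_IF"
  ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"

  ip addr add 10.0.0.1/24 dev "$INI_IF"                       # initiator address
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"   # target address

  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up

  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic back in
  ping -c 1 10.0.0.2                        # initiator -> target, as in the trace
  ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator

The nvmf_tgt application is then launched under ip netns exec so that it listens from inside the target namespace, which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD in the record above.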
00:12:20.018 [2024-07-15 22:09:45.115431] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:20.018 EAL: No free 2048 kB hugepages reported on node 1 00:12:20.018 [2024-07-15 22:09:45.187111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:20.018 [2024-07-15 22:09:45.262665] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:20.018 [2024-07-15 22:09:45.262702] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:20.018 [2024-07-15 22:09:45.262710] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:20.018 [2024-07-15 22:09:45.262716] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:20.018 [2024-07-15 22:09:45.262722] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:20.018 [2024-07-15 22:09:45.262864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:20.018 [2024-07-15 22:09:45.262986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:20.018 [2024-07-15 22:09:45.263165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:20.018 [2024-07-15 22:09:45.263179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.589 22:09:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:20.589 22:09:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:12:20.589 22:09:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:20.589 22:09:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:20.589 22:09:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:20.850 22:09:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:20.850 22:09:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:20.850 22:09:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.850 22:09:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:20.850 [2024-07-15 22:09:45.940722] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:20.850 22:09:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.850 22:09:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:20.850 22:09:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.850 22:09:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:20.850 Malloc0 00:12:20.850 22:09:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.850 22:09:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:20.850 22:09:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.850 22:09:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:20.850 Malloc1 00:12:20.850 22:09:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.850 22:09:45 
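Collecting the rpc_cmd calls traced here and in the records that follow, the TCP target is provisioned with the scripts/rpc.py sequence sketched below (rpc_cmd is the harness helper that forwards each call to rpc.py on the default socket /var/tmp/spdk.sock):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  $RPC nvmf_create_transport -t tcp -o -u 8192      # TCP transport with the harness options (-u: in-capsule data size)
  $RPC bdev_malloc_create 64 512 -b Malloc0         # two 64 MiB RAM bdevs, 512-byte blocks
  $RPC bdev_malloc_create 64 512 -b Malloc1
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
       -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The two malloc namespaces are what later surface on the initiator as /dev/nvme0n1 and /dev/nvme0n2 with serial SPDKISFASTANDAWESOME.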
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:20.850 22:09:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.850 22:09:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:20.850 22:09:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.850 22:09:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:20.850 22:09:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.850 22:09:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:20.850 22:09:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.850 22:09:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:20.850 22:09:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.850 22:09:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:20.850 22:09:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.850 22:09:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:20.850 22:09:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.850 22:09:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:20.850 [2024-07-15 22:09:46.030560] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:20.850 22:09:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.850 22:09:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:20.850 22:09:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.850 22:09:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:20.850 22:09:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.851 22:09:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:12:20.851 00:12:20.851 Discovery Log Number of Records 2, Generation counter 2 00:12:20.851 =====Discovery Log Entry 0====== 00:12:20.851 trtype: tcp 00:12:20.851 adrfam: ipv4 00:12:20.851 subtype: current discovery subsystem 00:12:20.851 treq: not required 00:12:20.851 portid: 0 00:12:20.851 trsvcid: 4420 00:12:20.851 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:20.851 traddr: 10.0.0.2 00:12:20.851 eflags: explicit discovery connections, duplicate discovery information 00:12:20.851 sectype: none 00:12:20.851 =====Discovery Log Entry 1====== 00:12:20.851 trtype: tcp 00:12:20.851 adrfam: ipv4 00:12:20.851 subtype: nvme subsystem 00:12:20.851 treq: not required 00:12:20.851 portid: 0 00:12:20.851 trsvcid: 4420 00:12:20.851 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:20.851 traddr: 10.0.0.2 00:12:20.851 eflags: none 00:12:20.851 sectype: none 00:12:20.851 22:09:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:20.851 22:09:46 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:20.851 22:09:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:20.851 22:09:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:20.851 22:09:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:20.851 22:09:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:20.851 22:09:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:20.851 22:09:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:20.851 22:09:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:20.851 22:09:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:20.851 22:09:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:22.763 22:09:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:22.763 22:09:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:12:22.763 22:09:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:22.763 22:09:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:22.763 22:09:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:22.763 22:09:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:24.673 22:09:49 
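On the initiator side, the discover/connect/teardown flow traced above and in the next record reduces to plain nvme-cli calls. A sketch, with the hostnqn/hostid pair generated on the fly (reusing the uuid as --hostid is an assumption that happens to match the values seen in this trace):

  HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  HOSTID=${HOSTNQN##*:}              # reuse the uuid portion as host id

  nvme discover -t tcp -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN" --hostid="$HOSTID"
  nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
                --hostnqn="$HOSTNQN" --hostid="$HOSTID"

  # waitforserial: block until both namespaces of the subsystem are visible
  until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 2 ]; do
    sleep 1
  done

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1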
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:12:24.673 /dev/nvme0n1 ]] 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:24.673 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:24.673 rmmod nvme_tcp 00:12:24.673 rmmod nvme_fabrics 00:12:24.673 rmmod nvme_keyring 00:12:24.673 22:09:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:24.933 22:09:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:12:24.933 22:09:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:12:24.933 22:09:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2676872 ']' 00:12:24.933 22:09:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2676872 00:12:24.933 22:09:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 2676872 ']' 00:12:24.933 22:09:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 2676872 00:12:24.933 22:09:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:12:24.933 22:09:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:24.933 22:09:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2676872 00:12:24.933 22:09:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:24.933 22:09:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:24.933 22:09:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2676872' 00:12:24.933 killing process with pid 2676872 00:12:24.933 22:09:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 2676872 00:12:24.933 22:09:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 2676872 00:12:24.933 22:09:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:24.933 22:09:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:24.933 22:09:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:24.933 22:09:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:24.933 22:09:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:24.934 22:09:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.934 22:09:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:24.934 22:09:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.473 22:09:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:27.473 00:12:27.473 real 0m14.312s 00:12:27.473 user 0m21.811s 00:12:27.473 sys 0m5.708s 00:12:27.473 22:09:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:27.473 22:09:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:27.473 ************************************ 00:12:27.473 END TEST nvmf_nvme_cli 00:12:27.474 ************************************ 00:12:27.474 22:09:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:27.474 22:09:52 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:12:27.474 22:09:52 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:27.474 22:09:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:27.474 22:09:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:27.474 22:09:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:27.474 ************************************ 00:12:27.474 START TEST nvmf_vfio_user 00:12:27.474 ************************************ 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:27.474 * Looking for test storage... 00:12:27.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:27.474 
22:09:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2678359 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2678359' 00:12:27.474 Process pid: 2678359 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2678359 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 2678359 ']' 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:27.474 22:09:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:27.474 [2024-07-15 22:09:52.566601] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:12:27.474 [2024-07-15 22:09:52.566659] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:27.474 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.474 [2024-07-15 22:09:52.628171] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:27.474 [2024-07-15 22:09:52.698272] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:27.474 [2024-07-15 22:09:52.698306] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:27.474 [2024-07-15 22:09:52.698314] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:27.474 [2024-07-15 22:09:52.698321] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:27.474 [2024-07-15 22:09:52.698327] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
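Once the target application is up, setup_nvmf_vfio_user (traced in the records that follow) creates two vfio-user controllers, each backed by a malloc namespace and exposed through a socket directory rather than an IP address. A consolidated sketch of that RPC sequence:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  $RPC nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user

  for i in 1 2; do
    dir=/var/run/vfio-user/domain/vfio-user$i/$i      # the controller's socket directory
    mkdir -p "$dir"
    $RPC bdev_malloc_create 64 512 -b Malloc$i
    $RPC nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    # for VFIOUSER, -a takes the directory path; -s 0 as passed by the test script
    $RPC nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER -a "$dir" -s 0
  done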
00:12:27.474 [2024-07-15 22:09:52.698408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:27.474 [2024-07-15 22:09:52.698538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:27.474 [2024-07-15 22:09:52.698696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.474 [2024-07-15 22:09:52.698698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:28.044 22:09:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:28.044 22:09:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:12:28.044 22:09:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:29.427 22:09:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:29.427 22:09:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:29.427 22:09:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:29.427 22:09:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:29.427 22:09:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:29.427 22:09:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:29.427 Malloc1 00:12:29.427 22:09:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:29.688 22:09:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:29.976 22:09:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:29.976 22:09:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:29.976 22:09:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:29.976 22:09:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:30.237 Malloc2 00:12:30.237 22:09:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:30.237 22:09:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:30.497 22:09:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:30.761 22:09:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:30.761 22:09:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:30.761 22:09:55 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:30.761 22:09:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:30.761 22:09:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:30.761 22:09:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:30.761 [2024-07-15 22:09:55.912897] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:12:30.761 [2024-07-15 22:09:55.912942] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2679052 ] 00:12:30.761 EAL: No free 2048 kB hugepages reported on node 1 00:12:30.761 [2024-07-15 22:09:55.944752] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:30.761 [2024-07-15 22:09:55.953212] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:30.761 [2024-07-15 22:09:55.953230] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f542e26b000 00:12:30.761 [2024-07-15 22:09:55.954210] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:30.761 [2024-07-15 22:09:55.955215] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:30.761 [2024-07-15 22:09:55.956227] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:30.761 [2024-07-15 22:09:55.957230] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:30.761 [2024-07-15 22:09:55.958233] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:30.761 [2024-07-15 22:09:55.959243] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:30.761 [2024-07-15 22:09:55.960243] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:30.761 [2024-07-15 22:09:55.961249] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:30.761 [2024-07-15 22:09:55.962262] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:30.761 [2024-07-15 22:09:55.962271] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f542e260000 00:12:30.761 [2024-07-15 22:09:55.963604] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:30.761 [2024-07-15 22:09:55.985290] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:30.761 [2024-07-15 22:09:55.985310] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:30.761 [2024-07-15 22:09:55.987392] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:30.761 [2024-07-15 22:09:55.987435] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:30.761 [2024-07-15 22:09:55.987516] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:30.761 [2024-07-15 22:09:55.987532] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:30.761 [2024-07-15 22:09:55.987538] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:30.761 [2024-07-15 22:09:55.988390] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:30.761 [2024-07-15 22:09:55.988399] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:30.761 [2024-07-15 22:09:55.988406] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:30.761 [2024-07-15 22:09:55.989403] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:30.761 [2024-07-15 22:09:55.989411] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:30.761 [2024-07-15 22:09:55.989419] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:30.761 [2024-07-15 22:09:55.990408] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:30.761 [2024-07-15 22:09:55.990416] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:30.761 [2024-07-15 22:09:55.991412] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:30.761 [2024-07-15 22:09:55.991420] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:30.761 [2024-07-15 22:09:55.991425] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:30.761 [2024-07-15 22:09:55.991431] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:30.761 [2024-07-15 22:09:55.991537] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:30.761 [2024-07-15 22:09:55.991542] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:30.761 [2024-07-15 22:09:55.991547] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:30.761 [2024-07-15 22:09:55.992417] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:30.761 [2024-07-15 22:09:55.993419] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:30.761 [2024-07-15 22:09:55.994428] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:30.761 [2024-07-15 22:09:55.995424] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:30.761 [2024-07-15 22:09:55.995478] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:30.761 [2024-07-15 22:09:55.996428] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:30.761 [2024-07-15 22:09:55.996436] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:30.761 [2024-07-15 22:09:55.996441] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:30.761 [2024-07-15 22:09:55.996462] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:30.761 [2024-07-15 22:09:55.996474] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:30.761 [2024-07-15 22:09:55.996488] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:30.761 [2024-07-15 22:09:55.996494] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:30.761 [2024-07-15 22:09:55.996507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:30.761 [2024-07-15 22:09:55.996542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:30.761 [2024-07-15 22:09:55.996550] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:30.761 [2024-07-15 22:09:55.996557] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:30.761 [2024-07-15 22:09:55.996561] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:30.761 [2024-07-15 22:09:55.996566] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:30.761 [2024-07-15 22:09:55.996571] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:12:30.761 [2024-07-15 22:09:55.996575] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:30.761 [2024-07-15 22:09:55.996580] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:30.761 [2024-07-15 22:09:55.996587] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:30.761 [2024-07-15 22:09:55.996596] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:30.761 [2024-07-15 22:09:55.996607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:30.762 [2024-07-15 22:09:55.996619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:30.762 [2024-07-15 22:09:55.996628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:30.762 [2024-07-15 22:09:55.996636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:30.762 [2024-07-15 22:09:55.996648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:30.762 [2024-07-15 22:09:55.996653] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:30.762 [2024-07-15 22:09:55.996661] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:30.762 [2024-07-15 22:09:55.996670] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:30.762 [2024-07-15 22:09:55.996678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:30.762 [2024-07-15 22:09:55.996683] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:30.762 [2024-07-15 22:09:55.996688] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:30.762 [2024-07-15 22:09:55.996694] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:30.762 [2024-07-15 22:09:55.996700] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:30.762 [2024-07-15 22:09:55.996709] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:30.762 [2024-07-15 22:09:55.996718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:30.762 [2024-07-15 22:09:55.996778] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:30.762 [2024-07-15 22:09:55.996786] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:30.762 [2024-07-15 22:09:55.996793] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:30.762 [2024-07-15 22:09:55.996798] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:30.762 [2024-07-15 22:09:55.996804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:30.762 [2024-07-15 22:09:55.996819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:30.762 [2024-07-15 22:09:55.996828] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:30.762 [2024-07-15 22:09:55.996836] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:30.762 [2024-07-15 22:09:55.996843] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:30.762 [2024-07-15 22:09:55.996850] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:30.762 [2024-07-15 22:09:55.996854] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:30.762 [2024-07-15 22:09:55.996860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:30.762 [2024-07-15 22:09:55.996876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:30.762 [2024-07-15 22:09:55.996887] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:30.762 [2024-07-15 22:09:55.996896] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:30.762 [2024-07-15 22:09:55.996903] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:30.762 [2024-07-15 22:09:55.996907] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:30.762 [2024-07-15 22:09:55.996913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:30.762 [2024-07-15 22:09:55.996922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:30.762 [2024-07-15 22:09:55.996930] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:30.762 [2024-07-15 22:09:55.996936] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
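The IDENTIFY admin commands in this trace use CNS 01h (controller), 02h (active namespace ID list), 00h (namespace) and 03h (namespace identification descriptors) in cdw10. Purely for orientation, against a kernel-attached NVMe device (not the vfio-user controller exercised here) the same four lookups would be issued with nvme-cli roughly as follows; /dev/nvme0 and namespace ID 1 are placeholders:

  nvme id-ctrl  /dev/nvme0          # IDENTIFY, CNS 01h: controller data structure
  nvme list-ns  /dev/nvme0          # IDENTIFY, CNS 02h: active namespace ID list
  nvme id-ns    /dev/nvme0 -n 1     # IDENTIFY, CNS 00h: namespace data structure
  nvme ns-descs /dev/nvme0 -n 1     # IDENTIFY, CNS 03h: namespace identification descriptor list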
00:12:30.762 [2024-07-15 22:09:55.996943] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:30.762 [2024-07-15 22:09:55.996949] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:12:30.762 [2024-07-15 22:09:55.996954] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:30.762 [2024-07-15 22:09:55.996959] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:30.762 [2024-07-15 22:09:55.996964] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:30.762 [2024-07-15 22:09:55.996969] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:30.762 [2024-07-15 22:09:55.996973] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:30.762 [2024-07-15 22:09:55.996991] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:30.762 [2024-07-15 22:09:55.997000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:30.762 [2024-07-15 22:09:55.997011] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:30.762 [2024-07-15 22:09:55.997018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:30.762 [2024-07-15 22:09:55.997029] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:30.762 [2024-07-15 22:09:55.997036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:30.762 [2024-07-15 22:09:55.997047] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:30.762 [2024-07-15 22:09:55.997059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:30.762 [2024-07-15 22:09:55.997072] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:30.762 [2024-07-15 22:09:55.997077] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:30.762 [2024-07-15 22:09:55.997080] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:30.762 [2024-07-15 22:09:55.997084] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:30.762 [2024-07-15 22:09:55.997091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:30.762 [2024-07-15 22:09:55.997099] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:30.762 
[2024-07-15 22:09:55.997103] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:30.762 [2024-07-15 22:09:55.997109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:30.762 [2024-07-15 22:09:55.997116] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:30.762 [2024-07-15 22:09:55.997120] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:30.762 [2024-07-15 22:09:55.997131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:30.762 [2024-07-15 22:09:55.997139] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:30.762 [2024-07-15 22:09:55.997143] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:30.762 [2024-07-15 22:09:55.997148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:30.762 [2024-07-15 22:09:55.997156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:30.762 [2024-07-15 22:09:55.997168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:30.762 [2024-07-15 22:09:55.997179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:30.762 [2024-07-15 22:09:55.997186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:30.762 ===================================================== 00:12:30.762 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:30.762 ===================================================== 00:12:30.762 Controller Capabilities/Features 00:12:30.762 ================================ 00:12:30.762 Vendor ID: 4e58 00:12:30.763 Subsystem Vendor ID: 4e58 00:12:30.763 Serial Number: SPDK1 00:12:30.763 Model Number: SPDK bdev Controller 00:12:30.763 Firmware Version: 24.09 00:12:30.763 Recommended Arb Burst: 6 00:12:30.763 IEEE OUI Identifier: 8d 6b 50 00:12:30.763 Multi-path I/O 00:12:30.763 May have multiple subsystem ports: Yes 00:12:30.763 May have multiple controllers: Yes 00:12:30.763 Associated with SR-IOV VF: No 00:12:30.763 Max Data Transfer Size: 131072 00:12:30.763 Max Number of Namespaces: 32 00:12:30.763 Max Number of I/O Queues: 127 00:12:30.763 NVMe Specification Version (VS): 1.3 00:12:30.763 NVMe Specification Version (Identify): 1.3 00:12:30.763 Maximum Queue Entries: 256 00:12:30.763 Contiguous Queues Required: Yes 00:12:30.763 Arbitration Mechanisms Supported 00:12:30.763 Weighted Round Robin: Not Supported 00:12:30.763 Vendor Specific: Not Supported 00:12:30.763 Reset Timeout: 15000 ms 00:12:30.763 Doorbell Stride: 4 bytes 00:12:30.763 NVM Subsystem Reset: Not Supported 00:12:30.763 Command Sets Supported 00:12:30.763 NVM Command Set: Supported 00:12:30.763 Boot Partition: Not Supported 00:12:30.763 Memory Page Size Minimum: 4096 bytes 00:12:30.763 Memory Page Size Maximum: 4096 bytes 00:12:30.763 Persistent Memory Region: Not Supported 
00:12:30.763 Optional Asynchronous Events Supported 00:12:30.763 Namespace Attribute Notices: Supported 00:12:30.763 Firmware Activation Notices: Not Supported 00:12:30.763 ANA Change Notices: Not Supported 00:12:30.763 PLE Aggregate Log Change Notices: Not Supported 00:12:30.763 LBA Status Info Alert Notices: Not Supported 00:12:30.763 EGE Aggregate Log Change Notices: Not Supported 00:12:30.763 Normal NVM Subsystem Shutdown event: Not Supported 00:12:30.763 Zone Descriptor Change Notices: Not Supported 00:12:30.763 Discovery Log Change Notices: Not Supported 00:12:30.763 Controller Attributes 00:12:30.763 128-bit Host Identifier: Supported 00:12:30.763 Non-Operational Permissive Mode: Not Supported 00:12:30.763 NVM Sets: Not Supported 00:12:30.763 Read Recovery Levels: Not Supported 00:12:30.763 Endurance Groups: Not Supported 00:12:30.763 Predictable Latency Mode: Not Supported 00:12:30.763 Traffic Based Keep ALive: Not Supported 00:12:30.763 Namespace Granularity: Not Supported 00:12:30.763 SQ Associations: Not Supported 00:12:30.763 UUID List: Not Supported 00:12:30.763 Multi-Domain Subsystem: Not Supported 00:12:30.763 Fixed Capacity Management: Not Supported 00:12:30.763 Variable Capacity Management: Not Supported 00:12:30.763 Delete Endurance Group: Not Supported 00:12:30.763 Delete NVM Set: Not Supported 00:12:30.763 Extended LBA Formats Supported: Not Supported 00:12:30.763 Flexible Data Placement Supported: Not Supported 00:12:30.763 00:12:30.763 Controller Memory Buffer Support 00:12:30.763 ================================ 00:12:30.763 Supported: No 00:12:30.763 00:12:30.763 Persistent Memory Region Support 00:12:30.763 ================================ 00:12:30.763 Supported: No 00:12:30.763 00:12:30.763 Admin Command Set Attributes 00:12:30.763 ============================ 00:12:30.763 Security Send/Receive: Not Supported 00:12:30.763 Format NVM: Not Supported 00:12:30.763 Firmware Activate/Download: Not Supported 00:12:30.763 Namespace Management: Not Supported 00:12:30.763 Device Self-Test: Not Supported 00:12:30.763 Directives: Not Supported 00:12:30.763 NVMe-MI: Not Supported 00:12:30.763 Virtualization Management: Not Supported 00:12:30.763 Doorbell Buffer Config: Not Supported 00:12:30.763 Get LBA Status Capability: Not Supported 00:12:30.763 Command & Feature Lockdown Capability: Not Supported 00:12:30.763 Abort Command Limit: 4 00:12:30.763 Async Event Request Limit: 4 00:12:30.763 Number of Firmware Slots: N/A 00:12:30.763 Firmware Slot 1 Read-Only: N/A 00:12:30.763 Firmware Activation Without Reset: N/A 00:12:30.763 Multiple Update Detection Support: N/A 00:12:30.763 Firmware Update Granularity: No Information Provided 00:12:30.763 Per-Namespace SMART Log: No 00:12:30.763 Asymmetric Namespace Access Log Page: Not Supported 00:12:30.763 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:30.763 Command Effects Log Page: Supported 00:12:30.763 Get Log Page Extended Data: Supported 00:12:30.763 Telemetry Log Pages: Not Supported 00:12:30.763 Persistent Event Log Pages: Not Supported 00:12:30.763 Supported Log Pages Log Page: May Support 00:12:30.763 Commands Supported & Effects Log Page: Not Supported 00:12:30.763 Feature Identifiers & Effects Log Page:May Support 00:12:30.763 NVMe-MI Commands & Effects Log Page: May Support 00:12:30.763 Data Area 4 for Telemetry Log: Not Supported 00:12:30.763 Error Log Page Entries Supported: 128 00:12:30.763 Keep Alive: Supported 00:12:30.763 Keep Alive Granularity: 10000 ms 00:12:30.763 00:12:30.763 NVM Command Set Attributes 
00:12:30.763 ========================== 00:12:30.763 Submission Queue Entry Size 00:12:30.763 Max: 64 00:12:30.763 Min: 64 00:12:30.763 Completion Queue Entry Size 00:12:30.763 Max: 16 00:12:30.763 Min: 16 00:12:30.763 Number of Namespaces: 32 00:12:30.763 Compare Command: Supported 00:12:30.763 Write Uncorrectable Command: Not Supported 00:12:30.763 Dataset Management Command: Supported 00:12:30.763 Write Zeroes Command: Supported 00:12:30.763 Set Features Save Field: Not Supported 00:12:30.763 Reservations: Not Supported 00:12:30.763 Timestamp: Not Supported 00:12:30.763 Copy: Supported 00:12:30.763 Volatile Write Cache: Present 00:12:30.763 Atomic Write Unit (Normal): 1 00:12:30.763 Atomic Write Unit (PFail): 1 00:12:30.763 Atomic Compare & Write Unit: 1 00:12:30.763 Fused Compare & Write: Supported 00:12:30.763 Scatter-Gather List 00:12:30.763 SGL Command Set: Supported (Dword aligned) 00:12:30.763 SGL Keyed: Not Supported 00:12:30.763 SGL Bit Bucket Descriptor: Not Supported 00:12:30.763 SGL Metadata Pointer: Not Supported 00:12:30.763 Oversized SGL: Not Supported 00:12:30.763 SGL Metadata Address: Not Supported 00:12:30.763 SGL Offset: Not Supported 00:12:30.763 Transport SGL Data Block: Not Supported 00:12:30.763 Replay Protected Memory Block: Not Supported 00:12:30.763 00:12:30.763 Firmware Slot Information 00:12:30.763 ========================= 00:12:30.763 Active slot: 1 00:12:30.763 Slot 1 Firmware Revision: 24.09 00:12:30.763 00:12:30.763 00:12:30.763 Commands Supported and Effects 00:12:30.763 ============================== 00:12:30.763 Admin Commands 00:12:30.763 -------------- 00:12:30.763 Get Log Page (02h): Supported 00:12:30.763 Identify (06h): Supported 00:12:30.763 Abort (08h): Supported 00:12:30.763 Set Features (09h): Supported 00:12:30.763 Get Features (0Ah): Supported 00:12:30.763 Asynchronous Event Request (0Ch): Supported 00:12:30.763 Keep Alive (18h): Supported 00:12:30.763 I/O Commands 00:12:30.763 ------------ 00:12:30.763 Flush (00h): Supported LBA-Change 00:12:30.763 Write (01h): Supported LBA-Change 00:12:30.763 Read (02h): Supported 00:12:30.763 Compare (05h): Supported 00:12:30.763 Write Zeroes (08h): Supported LBA-Change 00:12:30.763 Dataset Management (09h): Supported LBA-Change 00:12:30.763 Copy (19h): Supported LBA-Change 00:12:30.763 00:12:30.763 Error Log 00:12:30.763 ========= 00:12:30.763 00:12:30.763 Arbitration 00:12:30.763 =========== 00:12:30.763 Arbitration Burst: 1 00:12:30.763 00:12:30.763 Power Management 00:12:30.763 ================ 00:12:30.763 Number of Power States: 1 00:12:30.763 Current Power State: Power State #0 00:12:30.763 Power State #0: 00:12:30.763 Max Power: 0.00 W 00:12:30.763 Non-Operational State: Operational 00:12:30.763 Entry Latency: Not Reported 00:12:30.763 Exit Latency: Not Reported 00:12:30.763 Relative Read Throughput: 0 00:12:30.763 Relative Read Latency: 0 00:12:30.763 Relative Write Throughput: 0 00:12:30.763 Relative Write Latency: 0 00:12:30.763 Idle Power: Not Reported 00:12:30.763 Active Power: Not Reported 00:12:30.763 Non-Operational Permissive Mode: Not Supported 00:12:30.763 00:12:30.763 Health Information 00:12:30.763 ================== 00:12:30.763 Critical Warnings: 00:12:30.763 Available Spare Space: OK 00:12:30.763 Temperature: OK 00:12:30.763 Device Reliability: OK 00:12:30.763 Read Only: No 00:12:30.763 Volatile Memory Backup: OK 00:12:30.763 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:30.763 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:30.763 Available Spare: 0% 00:12:30.763 
[2024-07-15 22:09:55.997289] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:30.763 [2024-07-15 22:09:55.997298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:30.763 [2024-07-15 22:09:55.997327] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:30.763 [2024-07-15 22:09:55.997336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:30.763 [2024-07-15 22:09:55.997343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:30.763 [2024-07-15 22:09:55.997349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:30.764 [2024-07-15 22:09:55.997355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:30.764 [2024-07-15 22:09:55.997435] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:30.764 [2024-07-15 22:09:55.997444] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:30.764 [2024-07-15 22:09:55.998435] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:30.764 [2024-07-15 22:09:55.998473] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:30.764 [2024-07-15 22:09:55.998480] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:30.764 [2024-07-15 22:09:55.999444] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:30.764 [2024-07-15 22:09:55.999457] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:30.764 [2024-07-15 22:09:55.999516] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:30.764 [2024-07-15 22:09:56.006129] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:30.764 Available Spare Threshold: 0% 00:12:30.764 Life Percentage Used: 0% 00:12:30.764 Data Units Read: 0 00:12:30.764 Data Units Written: 0 00:12:30.764 Host Read Commands: 0 00:12:30.764 Host Write Commands: 0 00:12:30.764 Controller Busy Time: 0 minutes 00:12:30.764 Power Cycles: 0 00:12:30.764 Power On Hours: 0 hours 00:12:30.764 Unsafe Shutdowns: 0 00:12:30.764 Unrecoverable Media Errors: 0 00:12:30.764 Lifetime Error Log Entries: 0 00:12:30.764 Warning Temperature Time: 0 minutes 00:12:30.764 Critical Temperature Time: 0 minutes 00:12:30.764 00:12:30.764 Number of Queues 00:12:30.764 ================ 00:12:30.764 Number of I/O Submission Queues: 127 00:12:30.764 Number of I/O Completion Queues: 127 00:12:30.764 00:12:30.764 Active Namespaces 00:12:30.764 ================= 00:12:30.764 Namespace ID:1 00:12:30.764 Error Recovery Timeout: Unlimited 00:12:30.764 Command 
Set Identifier: NVM (00h) 00:12:30.764 Deallocate: Supported 00:12:30.764 Deallocated/Unwritten Error: Not Supported 00:12:30.764 Deallocated Read Value: Unknown 00:12:30.764 Deallocate in Write Zeroes: Not Supported 00:12:30.764 Deallocated Guard Field: 0xFFFF 00:12:30.764 Flush: Supported 00:12:30.764 Reservation: Supported 00:12:30.764 Namespace Sharing Capabilities: Multiple Controllers 00:12:30.764 Size (in LBAs): 131072 (0GiB) 00:12:30.764 Capacity (in LBAs): 131072 (0GiB) 00:12:30.764 Utilization (in LBAs): 131072 (0GiB) 00:12:30.764 NGUID: 197292EC4E8F425D84674808B2EC9EB9 00:12:30.764 UUID: 197292ec-4e8f-425d-8467-4808b2ec9eb9 00:12:30.764 Thin Provisioning: Not Supported 00:12:30.764 Per-NS Atomic Units: Yes 00:12:30.764 Atomic Boundary Size (Normal): 0 00:12:30.764 Atomic Boundary Size (PFail): 0 00:12:30.764 Atomic Boundary Offset: 0 00:12:30.764 Maximum Single Source Range Length: 65535 00:12:30.764 Maximum Copy Length: 65535 00:12:30.764 Maximum Source Range Count: 1 00:12:30.764 NGUID/EUI64 Never Reused: No 00:12:30.764 Namespace Write Protected: No 00:12:30.764 Number of LBA Formats: 1 00:12:30.764 Current LBA Format: LBA Format #00 00:12:30.764 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:30.764 00:12:30.764 22:09:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:31.025 EAL: No free 2048 kB hugepages reported on node 1 00:12:31.025 [2024-07-15 22:09:56.189759] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:36.310 Initializing NVMe Controllers 00:12:36.310 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:36.310 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:36.310 Initialization complete. Launching workers. 00:12:36.310 ======================================================== 00:12:36.310 Latency(us) 00:12:36.310 Device Information : IOPS MiB/s Average min max 00:12:36.310 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40071.69 156.53 3193.96 835.88 7579.00 00:12:36.310 ======================================================== 00:12:36.310 Total : 40071.69 156.53 3193.96 835.88 7579.00 00:12:36.310 00:12:36.310 [2024-07-15 22:10:01.206764] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:36.310 22:10:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:36.310 EAL: No free 2048 kB hugepages reported on node 1 00:12:36.310 [2024-07-15 22:10:01.389621] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:41.593 Initializing NVMe Controllers 00:12:41.593 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:41.593 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:41.593 Initialization complete. Launching workers. 
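
The spdk_nvme_perf invocations above drive 4 KiB reads and then writes at queue depth 128 against the vfio-user controller, and each run ends with one summary row per attached device: IOPS, MiB/s, and average/min/max latency in microseconds (the read summary appears above, the write summary follows). A minimal sketch for pulling those rows out of a saved copy of this console log is shown below; the file name build.log, the helper name, and the regular expression are illustrative assumptions and are not part of the SPDK test scripts.

#!/usr/bin/env python3
# Illustrative sketch (not part of the test suite): extract the per-device
# spdk_nvme_perf summary rows from a saved copy of this console log.
# The path "build.log" is an assumed name for the captured output.
import re
import sys

SUMMARY = re.compile(
    r"(?P<dev>VFIOUSER \([^)]+\) NSID \d+) from core\s+\d+:\s+"
    r"(?P<iops>[\d.]+)\s+(?P<mibs>[\d.]+)\s+(?P<avg>[\d.]+)\s+(?P<min>[\d.]+)\s+(?P<max>[\d.]+)"
)

def report(path):
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = SUMMARY.search(line)
            if m:
                print(f"{m['dev']}: {float(m['iops']):.0f} IOPS, {m['mibs']} MiB/s, "
                      f"avg {m['avg']} us (min {m['min']}, max {m['max']})")

if __name__ == "__main__":
    report(sys.argv[1] if len(sys.argv) > 1 else "build.log")
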
00:12:41.593 ======================================================== 00:12:41.593 Latency(us) 00:12:41.593 Device Information : IOPS MiB/s Average min max 00:12:41.593 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16041.25 62.66 7978.92 6983.15 8019.47 00:12:41.593 ======================================================== 00:12:41.593 Total : 16041.25 62.66 7978.92 6983.15 8019.47 00:12:41.593 00:12:41.593 [2024-07-15 22:10:06.420987] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:41.593 22:10:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:41.593 EAL: No free 2048 kB hugepages reported on node 1 00:12:41.593 [2024-07-15 22:10:06.615870] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:46.877 [2024-07-15 22:10:11.681335] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:46.877 Initializing NVMe Controllers 00:12:46.877 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:46.877 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:46.877 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:46.877 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:46.877 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:46.877 Initialization complete. Launching workers. 00:12:46.877 Starting thread on core 2 00:12:46.877 Starting thread on core 3 00:12:46.877 Starting thread on core 1 00:12:46.877 22:10:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:46.877 EAL: No free 2048 kB hugepages reported on node 1 00:12:46.877 [2024-07-15 22:10:11.935478] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:50.175 [2024-07-15 22:10:14.991817] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:50.176 Initializing NVMe Controllers 00:12:50.176 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:50.176 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:50.176 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:50.176 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:50.176 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:50.176 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:50.176 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:50.176 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:50.176 Initialization complete. Launching workers. 
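
In the arbitration summary that follows, each SPDK bdev Controller (SPDK1) core row reports the same measurement two ways: the IO/s column and the secs/100000 ios column are reciprocals scaled by the fixed I/O count of 100000 passed to the example (the -n 100000 argument echoed in the configuration above). A quick illustrative check against the numbers printed below; the snippet is an assumption-free arithmetic demonstration only, not part of the test.

# Illustrative check only: "secs/100000 ios" is 100000 divided by the IO/s column.
for core, iops in ((0, 8224.67), (1, 12194.00), (2, 8216.67), (3, 13223.00)):
    print(f"core {core}: {iops:9.2f} IO/s -> {100000 / iops:5.2f} secs/100000 ios")
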
00:12:50.176 Starting thread on core 1 with urgent priority queue 00:12:50.176 Starting thread on core 2 with urgent priority queue 00:12:50.176 Starting thread on core 3 with urgent priority queue 00:12:50.176 Starting thread on core 0 with urgent priority queue 00:12:50.176 SPDK bdev Controller (SPDK1 ) core 0: 8224.67 IO/s 12.16 secs/100000 ios 00:12:50.176 SPDK bdev Controller (SPDK1 ) core 1: 12194.00 IO/s 8.20 secs/100000 ios 00:12:50.176 SPDK bdev Controller (SPDK1 ) core 2: 8216.67 IO/s 12.17 secs/100000 ios 00:12:50.176 SPDK bdev Controller (SPDK1 ) core 3: 13223.00 IO/s 7.56 secs/100000 ios 00:12:50.176 ======================================================== 00:12:50.176 00:12:50.176 22:10:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:50.176 EAL: No free 2048 kB hugepages reported on node 1 00:12:50.176 [2024-07-15 22:10:15.254582] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:50.176 Initializing NVMe Controllers 00:12:50.176 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:50.176 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:50.176 Namespace ID: 1 size: 0GB 00:12:50.176 Initialization complete. 00:12:50.176 INFO: using host memory buffer for IO 00:12:50.176 Hello world! 00:12:50.176 [2024-07-15 22:10:15.288785] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:50.176 22:10:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:50.176 EAL: No free 2048 kB hugepages reported on node 1 00:12:50.436 [2024-07-15 22:10:15.552530] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:51.388 Initializing NVMe Controllers 00:12:51.388 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:51.388 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:51.388 Initialization complete. Launching workers. 
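
The overhead tool's output that follows begins with submit/complete summaries in nanoseconds and then prints two bucketed histograms; each histogram row is a latency range in microseconds, a cumulative percentage, and the per-bucket I/O count in parentheses, so summing the counts recovers the total number of I/Os. A small sketch of folding such rows back into a total appears below; the row format is inferred from the output that follows, and the helper name is purely illustrative.

import re

# A histogram row looks like "3.920 - 3.947: 3.3802% ( 587)":
# latency range in us, cumulative percentage, per-bucket count in parentheses.
ROW = re.compile(r"([\d.]+)\s*-\s*([\d.]+):\s*([\d.]+)%\s*\(\s*(\d+)\)")

def total_ios(lines):
    """Sum the per-bucket counts of a submit/complete histogram."""
    return sum(int(m.group(4)) for m in map(ROW.search, lines) if m)

# Example with the first two submit-histogram rows shown below: 62 + 587 = 649.
print(total_ios(["3.893 - 3.920: 0.3229% ( 62)", "3.920 - 3.947: 3.3802% ( 587)"]))
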
00:12:51.388 submit (in ns) avg, min, max = 6566.2, 3911.7, 4002087.5 00:12:51.388 complete (in ns) avg, min, max = 19134.9, 2372.5, 4000335.8 00:12:51.388 00:12:51.388 Submit histogram 00:12:51.388 ================ 00:12:51.388 Range in us Cumulative Count 00:12:51.388 3.893 - 3.920: 0.3229% ( 62) 00:12:51.388 3.920 - 3.947: 3.3802% ( 587) 00:12:51.388 3.947 - 3.973: 12.6302% ( 1776) 00:12:51.388 3.973 - 4.000: 24.2552% ( 2232) 00:12:51.388 4.000 - 4.027: 34.9167% ( 2047) 00:12:51.388 4.027 - 4.053: 45.3646% ( 2006) 00:12:51.388 4.053 - 4.080: 59.4635% ( 2707) 00:12:51.388 4.080 - 4.107: 75.0729% ( 2997) 00:12:51.389 4.107 - 4.133: 87.9688% ( 2476) 00:12:51.389 4.133 - 4.160: 95.2135% ( 1391) 00:12:51.389 4.160 - 4.187: 98.1198% ( 558) 00:12:51.389 4.187 - 4.213: 99.0833% ( 185) 00:12:51.389 4.213 - 4.240: 99.3802% ( 57) 00:12:51.389 4.240 - 4.267: 99.4375% ( 11) 00:12:51.389 4.267 - 4.293: 99.4583% ( 4) 00:12:51.389 4.293 - 4.320: 99.4688% ( 2) 00:12:51.389 4.427 - 4.453: 99.4740% ( 1) 00:12:51.389 4.480 - 4.507: 99.4792% ( 1) 00:12:51.389 4.507 - 4.533: 99.4896% ( 2) 00:12:51.389 4.640 - 4.667: 99.4948% ( 1) 00:12:51.389 4.667 - 4.693: 99.5052% ( 2) 00:12:51.389 4.693 - 4.720: 99.5104% ( 1) 00:12:51.389 4.773 - 4.800: 99.5156% ( 1) 00:12:51.389 4.827 - 4.853: 99.5208% ( 1) 00:12:51.389 4.933 - 4.960: 99.5260% ( 1) 00:12:51.389 4.960 - 4.987: 99.5312% ( 1) 00:12:51.389 4.987 - 5.013: 99.5365% ( 1) 00:12:51.389 5.173 - 5.200: 99.5417% ( 1) 00:12:51.389 5.280 - 5.307: 99.5469% ( 1) 00:12:51.389 5.387 - 5.413: 99.5521% ( 1) 00:12:51.389 5.627 - 5.653: 99.5625% ( 2) 00:12:51.389 5.760 - 5.787: 99.5677% ( 1) 00:12:51.389 5.787 - 5.813: 99.5729% ( 1) 00:12:51.389 5.867 - 5.893: 99.5781% ( 1) 00:12:51.389 5.947 - 5.973: 99.5833% ( 1) 00:12:51.389 5.973 - 6.000: 99.5885% ( 1) 00:12:51.389 6.027 - 6.053: 99.5990% ( 2) 00:12:51.389 6.053 - 6.080: 99.6094% ( 2) 00:12:51.389 6.080 - 6.107: 99.6146% ( 1) 00:12:51.389 6.133 - 6.160: 99.6250% ( 2) 00:12:51.389 6.160 - 6.187: 99.6406% ( 3) 00:12:51.389 6.187 - 6.213: 99.6458% ( 1) 00:12:51.389 6.213 - 6.240: 99.6562% ( 2) 00:12:51.389 6.267 - 6.293: 99.6615% ( 1) 00:12:51.389 6.320 - 6.347: 99.6719% ( 2) 00:12:51.389 6.347 - 6.373: 99.6875% ( 3) 00:12:51.389 6.373 - 6.400: 99.6927% ( 1) 00:12:51.389 6.400 - 6.427: 99.7135% ( 4) 00:12:51.389 6.480 - 6.507: 99.7188% ( 1) 00:12:51.389 6.507 - 6.533: 99.7240% ( 1) 00:12:51.389 6.587 - 6.613: 99.7344% ( 2) 00:12:51.389 6.747 - 6.773: 99.7396% ( 1) 00:12:51.389 6.800 - 6.827: 99.7448% ( 1) 00:12:51.389 7.200 - 7.253: 99.7552% ( 2) 00:12:51.389 7.253 - 7.307: 99.7656% ( 2) 00:12:51.389 7.360 - 7.413: 99.7708% ( 1) 00:12:51.389 7.520 - 7.573: 99.7917% ( 4) 00:12:51.389 7.573 - 7.627: 99.8021% ( 2) 00:12:51.389 7.627 - 7.680: 99.8073% ( 1) 00:12:51.389 7.733 - 7.787: 99.8177% ( 2) 00:12:51.389 7.787 - 7.840: 99.8281% ( 2) 00:12:51.389 7.840 - 7.893: 99.8385% ( 2) 00:12:51.389 7.893 - 7.947: 99.8438% ( 1) 00:12:51.389 7.947 - 8.000: 99.8542% ( 2) 00:12:51.389 8.000 - 8.053: 99.8646% ( 2) 00:12:51.389 8.160 - 8.213: 99.8698% ( 1) 00:12:51.389 8.213 - 8.267: 99.8750% ( 1) 00:12:51.389 8.267 - 8.320: 99.8854% ( 2) 00:12:51.389 8.427 - 8.480: 99.8958% ( 2) 00:12:51.389 8.853 - 8.907: 99.9010% ( 1) 00:12:51.389 8.907 - 8.960: 99.9062% ( 1) 00:12:51.389 8.960 - 9.013: 99.9115% ( 1) 00:12:51.389 9.173 - 9.227: 99.9167% ( 1) 00:12:51.389 9.440 - 9.493: 99.9219% ( 1) 00:12:51.389 12.320 - 12.373: 99.9271% ( 1) 00:12:51.389 13.547 - 13.600: 99.9323% ( 1) 00:12:51.389 14.293 - 14.400: 99.9375% ( 1) 00:12:51.389 3986.773 - 
4014.080: 100.0000% ( 12) 00:12:51.389 00:12:51.389 Complete histogram 00:12:51.389 ================== 00:12:51.389 Ra[2024-07-15 22:10:16.576003] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:51.389 nge in us Cumulative Count 00:12:51.389 2.360 - 2.373: 0.0052% ( 1) 00:12:51.389 2.373 - 2.387: 0.0417% ( 7) 00:12:51.389 2.387 - 2.400: 0.8958% ( 164) 00:12:51.389 2.400 - 2.413: 1.0365% ( 27) 00:12:51.389 2.413 - 2.427: 1.2083% ( 33) 00:12:51.389 2.427 - 2.440: 1.3281% ( 23) 00:12:51.389 2.440 - 2.453: 44.9062% ( 8367) 00:12:51.389 2.453 - 2.467: 56.8281% ( 2289) 00:12:51.389 2.467 - 2.480: 70.6250% ( 2649) 00:12:51.389 2.480 - 2.493: 78.8750% ( 1584) 00:12:51.389 2.493 - 2.507: 81.5365% ( 511) 00:12:51.389 2.507 - 2.520: 85.0677% ( 678) 00:12:51.389 2.520 - 2.533: 91.0781% ( 1154) 00:12:51.389 2.533 - 2.547: 94.6719% ( 690) 00:12:51.389 2.547 - 2.560: 97.0156% ( 450) 00:12:51.389 2.560 - 2.573: 98.5677% ( 298) 00:12:51.389 2.573 - 2.587: 99.1354% ( 109) 00:12:51.389 2.587 - 2.600: 99.2448% ( 21) 00:12:51.389 2.600 - 2.613: 99.2708% ( 5) 00:12:51.389 2.613 - 2.627: 99.2760% ( 1) 00:12:51.389 2.627 - 2.640: 99.2812% ( 1) 00:12:51.389 2.787 - 2.800: 99.2917% ( 2) 00:12:51.389 2.867 - 2.880: 99.2969% ( 1) 00:12:51.389 2.880 - 2.893: 99.3021% ( 1) 00:12:51.389 2.920 - 2.933: 99.3073% ( 1) 00:12:51.389 3.000 - 3.013: 99.3125% ( 1) 00:12:51.389 3.040 - 3.053: 99.3177% ( 1) 00:12:51.389 4.507 - 4.533: 99.3229% ( 1) 00:12:51.389 4.533 - 4.560: 99.3333% ( 2) 00:12:51.389 4.560 - 4.587: 99.3385% ( 1) 00:12:51.389 4.587 - 4.613: 99.3438% ( 1) 00:12:51.389 4.613 - 4.640: 99.3490% ( 1) 00:12:51.389 4.667 - 4.693: 99.3646% ( 3) 00:12:51.389 4.720 - 4.747: 99.3750% ( 2) 00:12:51.389 4.747 - 4.773: 99.3802% ( 1) 00:12:51.389 4.773 - 4.800: 99.3854% ( 1) 00:12:51.389 4.827 - 4.853: 99.3906% ( 1) 00:12:51.389 4.880 - 4.907: 99.3958% ( 1) 00:12:51.389 4.933 - 4.960: 99.4062% ( 2) 00:12:51.389 5.573 - 5.600: 99.4115% ( 1) 00:12:51.389 5.627 - 5.653: 99.4167% ( 1) 00:12:51.389 5.787 - 5.813: 99.4219% ( 1) 00:12:51.389 5.813 - 5.840: 99.4271% ( 1) 00:12:51.389 5.920 - 5.947: 99.4375% ( 2) 00:12:51.389 6.000 - 6.027: 99.4479% ( 2) 00:12:51.389 6.027 - 6.053: 99.4531% ( 1) 00:12:51.389 6.053 - 6.080: 99.4583% ( 1) 00:12:51.389 6.107 - 6.133: 99.4688% ( 2) 00:12:51.389 6.213 - 6.240: 99.4740% ( 1) 00:12:51.389 6.240 - 6.267: 99.4792% ( 1) 00:12:51.389 6.320 - 6.347: 99.4844% ( 1) 00:12:51.389 6.453 - 6.480: 99.4896% ( 1) 00:12:51.389 6.533 - 6.560: 99.4948% ( 1) 00:12:51.389 6.560 - 6.587: 99.5000% ( 1) 00:12:51.389 6.587 - 6.613: 99.5052% ( 1) 00:12:51.389 6.827 - 6.880: 99.5104% ( 1) 00:12:51.389 6.880 - 6.933: 99.5156% ( 1) 00:12:51.389 7.040 - 7.093: 99.5208% ( 1) 00:12:51.389 7.253 - 7.307: 99.5260% ( 1) 00:12:51.389 7.307 - 7.360: 99.5312% ( 1) 00:12:51.389 7.360 - 7.413: 99.5365% ( 1) 00:12:51.389 7.413 - 7.467: 99.5417% ( 1) 00:12:51.389 7.733 - 7.787: 99.5469% ( 1) 00:12:51.389 8.640 - 8.693: 99.5521% ( 1) 00:12:51.389 10.027 - 10.080: 99.5573% ( 1) 00:12:51.389 10.293 - 10.347: 99.5625% ( 1) 00:12:51.389 10.400 - 10.453: 99.5677% ( 1) 00:12:51.389 13.227 - 13.280: 99.5729% ( 1) 00:12:51.389 13.653 - 13.760: 99.5781% ( 1) 00:12:51.389 159.573 - 160.427: 99.5833% ( 1) 00:12:51.389 3986.773 - 4014.080: 100.0000% ( 80) 00:12:51.389 00:12:51.389 22:10:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:51.389 22:10:16 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:51.389 22:10:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:51.389 22:10:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:51.389 22:10:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:51.649 [ 00:12:51.649 { 00:12:51.649 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:51.649 "subtype": "Discovery", 00:12:51.649 "listen_addresses": [], 00:12:51.649 "allow_any_host": true, 00:12:51.649 "hosts": [] 00:12:51.649 }, 00:12:51.649 { 00:12:51.649 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:51.649 "subtype": "NVMe", 00:12:51.649 "listen_addresses": [ 00:12:51.649 { 00:12:51.649 "trtype": "VFIOUSER", 00:12:51.649 "adrfam": "IPv4", 00:12:51.649 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:51.649 "trsvcid": "0" 00:12:51.649 } 00:12:51.649 ], 00:12:51.649 "allow_any_host": true, 00:12:51.649 "hosts": [], 00:12:51.649 "serial_number": "SPDK1", 00:12:51.649 "model_number": "SPDK bdev Controller", 00:12:51.649 "max_namespaces": 32, 00:12:51.649 "min_cntlid": 1, 00:12:51.649 "max_cntlid": 65519, 00:12:51.649 "namespaces": [ 00:12:51.649 { 00:12:51.649 "nsid": 1, 00:12:51.649 "bdev_name": "Malloc1", 00:12:51.649 "name": "Malloc1", 00:12:51.649 "nguid": "197292EC4E8F425D84674808B2EC9EB9", 00:12:51.650 "uuid": "197292ec-4e8f-425d-8467-4808b2ec9eb9" 00:12:51.650 } 00:12:51.650 ] 00:12:51.650 }, 00:12:51.650 { 00:12:51.650 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:51.650 "subtype": "NVMe", 00:12:51.650 "listen_addresses": [ 00:12:51.650 { 00:12:51.650 "trtype": "VFIOUSER", 00:12:51.650 "adrfam": "IPv4", 00:12:51.650 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:51.650 "trsvcid": "0" 00:12:51.650 } 00:12:51.650 ], 00:12:51.650 "allow_any_host": true, 00:12:51.650 "hosts": [], 00:12:51.650 "serial_number": "SPDK2", 00:12:51.650 "model_number": "SPDK bdev Controller", 00:12:51.650 "max_namespaces": 32, 00:12:51.650 "min_cntlid": 1, 00:12:51.650 "max_cntlid": 65519, 00:12:51.650 "namespaces": [ 00:12:51.650 { 00:12:51.650 "nsid": 1, 00:12:51.650 "bdev_name": "Malloc2", 00:12:51.650 "name": "Malloc2", 00:12:51.650 "nguid": "AB40501464C748B0A0BCFD5C1E4562E0", 00:12:51.650 "uuid": "ab405014-64c7-48b0-a0bc-fd5c1e4562e0" 00:12:51.650 } 00:12:51.650 ] 00:12:51.650 } 00:12:51.650 ] 00:12:51.650 22:10:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:51.650 22:10:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2683231 00:12:51.650 22:10:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:51.650 22:10:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:12:51.650 22:10:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:51.650 22:10:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:51.650 22:10:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:51.650 22:10:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:12:51.650 22:10:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:51.650 22:10:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:51.650 EAL: No free 2048 kB hugepages reported on node 1 00:12:51.650 Malloc3 00:12:51.650 [2024-07-15 22:10:16.967692] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:51.650 22:10:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:51.910 [2024-07-15 22:10:17.119697] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:51.910 22:10:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:51.910 Asynchronous Event Request test 00:12:51.910 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:51.910 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:51.910 Registering asynchronous event callbacks... 00:12:51.910 Starting namespace attribute notice tests for all controllers... 00:12:51.910 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:51.910 aer_cb - Changed Namespace 00:12:51.910 Cleaning up... 00:12:52.172 [ 00:12:52.172 { 00:12:52.172 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:52.172 "subtype": "Discovery", 00:12:52.172 "listen_addresses": [], 00:12:52.172 "allow_any_host": true, 00:12:52.172 "hosts": [] 00:12:52.172 }, 00:12:52.172 { 00:12:52.172 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:52.172 "subtype": "NVMe", 00:12:52.172 "listen_addresses": [ 00:12:52.172 { 00:12:52.172 "trtype": "VFIOUSER", 00:12:52.172 "adrfam": "IPv4", 00:12:52.172 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:52.172 "trsvcid": "0" 00:12:52.172 } 00:12:52.172 ], 00:12:52.172 "allow_any_host": true, 00:12:52.172 "hosts": [], 00:12:52.172 "serial_number": "SPDK1", 00:12:52.172 "model_number": "SPDK bdev Controller", 00:12:52.172 "max_namespaces": 32, 00:12:52.172 "min_cntlid": 1, 00:12:52.172 "max_cntlid": 65519, 00:12:52.172 "namespaces": [ 00:12:52.172 { 00:12:52.172 "nsid": 1, 00:12:52.172 "bdev_name": "Malloc1", 00:12:52.172 "name": "Malloc1", 00:12:52.172 "nguid": "197292EC4E8F425D84674808B2EC9EB9", 00:12:52.172 "uuid": "197292ec-4e8f-425d-8467-4808b2ec9eb9" 00:12:52.172 }, 00:12:52.172 { 00:12:52.172 "nsid": 2, 00:12:52.172 "bdev_name": "Malloc3", 00:12:52.172 "name": "Malloc3", 00:12:52.172 "nguid": "689A4764DC534B55AA17C7B59467E79C", 00:12:52.172 "uuid": "689a4764-dc53-4b55-aa17-c7b59467e79c" 00:12:52.172 } 00:12:52.172 ] 00:12:52.172 }, 00:12:52.172 { 00:12:52.172 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:52.172 "subtype": "NVMe", 00:12:52.172 "listen_addresses": [ 00:12:52.172 { 00:12:52.172 "trtype": "VFIOUSER", 00:12:52.172 "adrfam": "IPv4", 00:12:52.172 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:52.172 "trsvcid": "0" 00:12:52.172 } 00:12:52.172 ], 00:12:52.172 "allow_any_host": true, 00:12:52.172 "hosts": [], 00:12:52.172 "serial_number": "SPDK2", 00:12:52.172 "model_number": "SPDK bdev Controller", 00:12:52.172 
"max_namespaces": 32, 00:12:52.172 "min_cntlid": 1, 00:12:52.172 "max_cntlid": 65519, 00:12:52.172 "namespaces": [ 00:12:52.172 { 00:12:52.172 "nsid": 1, 00:12:52.172 "bdev_name": "Malloc2", 00:12:52.172 "name": "Malloc2", 00:12:52.172 "nguid": "AB40501464C748B0A0BCFD5C1E4562E0", 00:12:52.173 "uuid": "ab405014-64c7-48b0-a0bc-fd5c1e4562e0" 00:12:52.173 } 00:12:52.173 ] 00:12:52.173 } 00:12:52.173 ] 00:12:52.173 22:10:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2683231 00:12:52.173 22:10:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:52.173 22:10:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:52.173 22:10:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:52.173 22:10:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:52.173 [2024-07-15 22:10:17.337082] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:12:52.173 [2024-07-15 22:10:17.337135] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2683411 ] 00:12:52.173 EAL: No free 2048 kB hugepages reported on node 1 00:12:52.173 [2024-07-15 22:10:17.369647] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:52.173 [2024-07-15 22:10:17.374316] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:52.173 [2024-07-15 22:10:17.374336] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f621e9f1000 00:12:52.173 [2024-07-15 22:10:17.375328] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:52.173 [2024-07-15 22:10:17.376323] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:52.173 [2024-07-15 22:10:17.377329] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:52.173 [2024-07-15 22:10:17.378337] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:52.173 [2024-07-15 22:10:17.379344] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:52.173 [2024-07-15 22:10:17.380351] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:52.173 [2024-07-15 22:10:17.381356] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:52.173 [2024-07-15 22:10:17.382361] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:52.173 [2024-07-15 22:10:17.383378] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:52.173 [2024-07-15 22:10:17.383387] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f621e9e6000 00:12:52.173 [2024-07-15 22:10:17.384711] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:52.173 [2024-07-15 22:10:17.405279] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:52.173 [2024-07-15 22:10:17.405303] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:12:52.173 [2024-07-15 22:10:17.407371] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:52.173 [2024-07-15 22:10:17.407414] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:52.173 [2024-07-15 22:10:17.407495] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:12:52.173 [2024-07-15 22:10:17.407509] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:12:52.173 [2024-07-15 22:10:17.407514] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:12:52.173 [2024-07-15 22:10:17.408380] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:52.173 [2024-07-15 22:10:17.408389] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:12:52.173 [2024-07-15 22:10:17.408396] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:12:52.173 [2024-07-15 22:10:17.409382] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:52.173 [2024-07-15 22:10:17.409396] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:12:52.173 [2024-07-15 22:10:17.409404] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:12:52.173 [2024-07-15 22:10:17.410389] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:52.173 [2024-07-15 22:10:17.410398] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:52.173 [2024-07-15 22:10:17.411391] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:52.173 [2024-07-15 22:10:17.411400] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:12:52.173 [2024-07-15 22:10:17.411405] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:12:52.173 [2024-07-15 22:10:17.411411] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:52.173 [2024-07-15 22:10:17.411517] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:12:52.173 [2024-07-15 22:10:17.411521] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:52.173 [2024-07-15 22:10:17.411526] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:52.173 [2024-07-15 22:10:17.412400] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:52.173 [2024-07-15 22:10:17.413404] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:52.173 [2024-07-15 22:10:17.414415] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:52.173 [2024-07-15 22:10:17.415415] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:52.173 [2024-07-15 22:10:17.415454] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:52.173 [2024-07-15 22:10:17.416425] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:52.173 [2024-07-15 22:10:17.416434] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:52.173 [2024-07-15 22:10:17.416438] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:12:52.173 [2024-07-15 22:10:17.416460] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:12:52.173 [2024-07-15 22:10:17.416467] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:12:52.173 [2024-07-15 22:10:17.416479] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:52.173 [2024-07-15 22:10:17.416484] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:52.173 [2024-07-15 22:10:17.416495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:52.173 [2024-07-15 22:10:17.423130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:52.173 [2024-07-15 22:10:17.423144] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:12:52.173 [2024-07-15 22:10:17.423151] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:12:52.173 [2024-07-15 22:10:17.423155] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:12:52.173 [2024-07-15 22:10:17.423160] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:52.173 [2024-07-15 22:10:17.423164] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:12:52.173 [2024-07-15 22:10:17.423169] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:12:52.173 [2024-07-15 22:10:17.423174] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:12:52.173 [2024-07-15 22:10:17.423181] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:12:52.173 [2024-07-15 22:10:17.423191] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:52.173 [2024-07-15 22:10:17.431130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:52.173 [2024-07-15 22:10:17.431144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:52.173 [2024-07-15 22:10:17.431153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:52.173 [2024-07-15 22:10:17.431161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:52.173 [2024-07-15 22:10:17.431169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:52.173 [2024-07-15 22:10:17.431174] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:12:52.173 [2024-07-15 22:10:17.431182] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:52.173 [2024-07-15 22:10:17.431191] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:52.173 [2024-07-15 22:10:17.439127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:52.173 [2024-07-15 22:10:17.439134] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:12:52.173 [2024-07-15 22:10:17.439139] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:52.173 [2024-07-15 22:10:17.439146] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:12:52.173 [2024-07-15 22:10:17.439151] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:12:52.173 [2024-07-15 22:10:17.439160] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:52.173 [2024-07-15 22:10:17.447135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:52.173 [2024-07-15 22:10:17.447201] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:12:52.174 [2024-07-15 22:10:17.447209] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:12:52.174 [2024-07-15 22:10:17.447216] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:52.174 [2024-07-15 22:10:17.447220] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:52.174 [2024-07-15 22:10:17.447227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:52.174 [2024-07-15 22:10:17.455128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:52.174 [2024-07-15 22:10:17.455139] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:12:52.174 [2024-07-15 22:10:17.455151] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:12:52.174 [2024-07-15 22:10:17.455158] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:12:52.174 [2024-07-15 22:10:17.455165] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:52.174 [2024-07-15 22:10:17.455169] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:52.174 [2024-07-15 22:10:17.455175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:52.174 [2024-07-15 22:10:17.463127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:52.174 [2024-07-15 22:10:17.463140] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:52.174 [2024-07-15 22:10:17.463148] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:52.174 [2024-07-15 22:10:17.463155] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:52.174 [2024-07-15 22:10:17.463159] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:52.174 [2024-07-15 22:10:17.463166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:52.174 [2024-07-15 22:10:17.471126] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:52.174 [2024-07-15 22:10:17.471135] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:52.174 [2024-07-15 22:10:17.471142] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:12:52.174 [2024-07-15 22:10:17.471150] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:12:52.174 [2024-07-15 22:10:17.471155] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:12:52.174 [2024-07-15 22:10:17.471160] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:52.174 [2024-07-15 22:10:17.471165] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:12:52.174 [2024-07-15 22:10:17.471170] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:12:52.174 [2024-07-15 22:10:17.471177] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:12:52.174 [2024-07-15 22:10:17.471182] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:12:52.174 [2024-07-15 22:10:17.471197] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:52.174 [2024-07-15 22:10:17.479234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:52.174 [2024-07-15 22:10:17.479249] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:52.174 [2024-07-15 22:10:17.487128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:52.174 [2024-07-15 22:10:17.487141] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:52.174 [2024-07-15 22:10:17.495129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:52.174 [2024-07-15 22:10:17.495142] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:52.435 [2024-07-15 22:10:17.503129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:52.435 [2024-07-15 22:10:17.503147] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:52.435 [2024-07-15 22:10:17.503152] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:52.435 [2024-07-15 22:10:17.503155] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 
00:12:52.435 [2024-07-15 22:10:17.503159] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:52.435 [2024-07-15 22:10:17.503165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:52.435 [2024-07-15 22:10:17.503173] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:52.435 [2024-07-15 22:10:17.503177] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:52.435 [2024-07-15 22:10:17.503183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:52.435 [2024-07-15 22:10:17.503190] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:52.435 [2024-07-15 22:10:17.503194] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:52.435 [2024-07-15 22:10:17.503200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:52.435 [2024-07-15 22:10:17.503208] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:52.435 [2024-07-15 22:10:17.503212] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:52.435 [2024-07-15 22:10:17.503217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:52.435 [2024-07-15 22:10:17.511128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:52.435 [2024-07-15 22:10:17.511144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:52.435 [2024-07-15 22:10:17.511155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:52.435 [2024-07-15 22:10:17.511164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:52.435 ===================================================== 00:12:52.435 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:52.435 ===================================================== 00:12:52.435 Controller Capabilities/Features 00:12:52.435 ================================ 00:12:52.435 Vendor ID: 4e58 00:12:52.435 Subsystem Vendor ID: 4e58 00:12:52.435 Serial Number: SPDK2 00:12:52.435 Model Number: SPDK bdev Controller 00:12:52.435 Firmware Version: 24.09 00:12:52.435 Recommended Arb Burst: 6 00:12:52.435 IEEE OUI Identifier: 8d 6b 50 00:12:52.435 Multi-path I/O 00:12:52.435 May have multiple subsystem ports: Yes 00:12:52.435 May have multiple controllers: Yes 00:12:52.435 Associated with SR-IOV VF: No 00:12:52.435 Max Data Transfer Size: 131072 00:12:52.435 Max Number of Namespaces: 32 00:12:52.435 Max Number of I/O Queues: 127 00:12:52.435 NVMe Specification Version (VS): 1.3 00:12:52.435 NVMe Specification Version (Identify): 1.3 00:12:52.435 Maximum Queue Entries: 256 00:12:52.435 Contiguous Queues Required: Yes 00:12:52.435 Arbitration Mechanisms 
Supported 00:12:52.435 Weighted Round Robin: Not Supported 00:12:52.435 Vendor Specific: Not Supported 00:12:52.435 Reset Timeout: 15000 ms 00:12:52.435 Doorbell Stride: 4 bytes 00:12:52.435 NVM Subsystem Reset: Not Supported 00:12:52.435 Command Sets Supported 00:12:52.435 NVM Command Set: Supported 00:12:52.435 Boot Partition: Not Supported 00:12:52.435 Memory Page Size Minimum: 4096 bytes 00:12:52.435 Memory Page Size Maximum: 4096 bytes 00:12:52.436 Persistent Memory Region: Not Supported 00:12:52.436 Optional Asynchronous Events Supported 00:12:52.436 Namespace Attribute Notices: Supported 00:12:52.436 Firmware Activation Notices: Not Supported 00:12:52.436 ANA Change Notices: Not Supported 00:12:52.436 PLE Aggregate Log Change Notices: Not Supported 00:12:52.436 LBA Status Info Alert Notices: Not Supported 00:12:52.436 EGE Aggregate Log Change Notices: Not Supported 00:12:52.436 Normal NVM Subsystem Shutdown event: Not Supported 00:12:52.436 Zone Descriptor Change Notices: Not Supported 00:12:52.436 Discovery Log Change Notices: Not Supported 00:12:52.436 Controller Attributes 00:12:52.436 128-bit Host Identifier: Supported 00:12:52.436 Non-Operational Permissive Mode: Not Supported 00:12:52.436 NVM Sets: Not Supported 00:12:52.436 Read Recovery Levels: Not Supported 00:12:52.436 Endurance Groups: Not Supported 00:12:52.436 Predictable Latency Mode: Not Supported 00:12:52.436 Traffic Based Keep ALive: Not Supported 00:12:52.436 Namespace Granularity: Not Supported 00:12:52.436 SQ Associations: Not Supported 00:12:52.436 UUID List: Not Supported 00:12:52.436 Multi-Domain Subsystem: Not Supported 00:12:52.436 Fixed Capacity Management: Not Supported 00:12:52.436 Variable Capacity Management: Not Supported 00:12:52.436 Delete Endurance Group: Not Supported 00:12:52.436 Delete NVM Set: Not Supported 00:12:52.436 Extended LBA Formats Supported: Not Supported 00:12:52.436 Flexible Data Placement Supported: Not Supported 00:12:52.436 00:12:52.436 Controller Memory Buffer Support 00:12:52.436 ================================ 00:12:52.436 Supported: No 00:12:52.436 00:12:52.436 Persistent Memory Region Support 00:12:52.436 ================================ 00:12:52.436 Supported: No 00:12:52.436 00:12:52.436 Admin Command Set Attributes 00:12:52.436 ============================ 00:12:52.436 Security Send/Receive: Not Supported 00:12:52.436 Format NVM: Not Supported 00:12:52.436 Firmware Activate/Download: Not Supported 00:12:52.436 Namespace Management: Not Supported 00:12:52.436 Device Self-Test: Not Supported 00:12:52.436 Directives: Not Supported 00:12:52.436 NVMe-MI: Not Supported 00:12:52.436 Virtualization Management: Not Supported 00:12:52.436 Doorbell Buffer Config: Not Supported 00:12:52.436 Get LBA Status Capability: Not Supported 00:12:52.436 Command & Feature Lockdown Capability: Not Supported 00:12:52.436 Abort Command Limit: 4 00:12:52.436 Async Event Request Limit: 4 00:12:52.436 Number of Firmware Slots: N/A 00:12:52.436 Firmware Slot 1 Read-Only: N/A 00:12:52.436 Firmware Activation Without Reset: N/A 00:12:52.436 Multiple Update Detection Support: N/A 00:12:52.436 Firmware Update Granularity: No Information Provided 00:12:52.436 Per-Namespace SMART Log: No 00:12:52.436 Asymmetric Namespace Access Log Page: Not Supported 00:12:52.436 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:52.436 Command Effects Log Page: Supported 00:12:52.436 Get Log Page Extended Data: Supported 00:12:52.436 Telemetry Log Pages: Not Supported 00:12:52.436 Persistent Event Log Pages: Not Supported 
00:12:52.436 Supported Log Pages Log Page: May Support 00:12:52.436 Commands Supported & Effects Log Page: Not Supported 00:12:52.436 Feature Identifiers & Effects Log Page:May Support 00:12:52.436 NVMe-MI Commands & Effects Log Page: May Support 00:12:52.436 Data Area 4 for Telemetry Log: Not Supported 00:12:52.436 Error Log Page Entries Supported: 128 00:12:52.436 Keep Alive: Supported 00:12:52.436 Keep Alive Granularity: 10000 ms 00:12:52.436 00:12:52.436 NVM Command Set Attributes 00:12:52.436 ========================== 00:12:52.436 Submission Queue Entry Size 00:12:52.436 Max: 64 00:12:52.436 Min: 64 00:12:52.436 Completion Queue Entry Size 00:12:52.436 Max: 16 00:12:52.436 Min: 16 00:12:52.436 Number of Namespaces: 32 00:12:52.436 Compare Command: Supported 00:12:52.436 Write Uncorrectable Command: Not Supported 00:12:52.436 Dataset Management Command: Supported 00:12:52.436 Write Zeroes Command: Supported 00:12:52.436 Set Features Save Field: Not Supported 00:12:52.436 Reservations: Not Supported 00:12:52.436 Timestamp: Not Supported 00:12:52.436 Copy: Supported 00:12:52.436 Volatile Write Cache: Present 00:12:52.436 Atomic Write Unit (Normal): 1 00:12:52.436 Atomic Write Unit (PFail): 1 00:12:52.436 Atomic Compare & Write Unit: 1 00:12:52.436 Fused Compare & Write: Supported 00:12:52.436 Scatter-Gather List 00:12:52.436 SGL Command Set: Supported (Dword aligned) 00:12:52.436 SGL Keyed: Not Supported 00:12:52.436 SGL Bit Bucket Descriptor: Not Supported 00:12:52.436 SGL Metadata Pointer: Not Supported 00:12:52.436 Oversized SGL: Not Supported 00:12:52.436 SGL Metadata Address: Not Supported 00:12:52.436 SGL Offset: Not Supported 00:12:52.436 Transport SGL Data Block: Not Supported 00:12:52.436 Replay Protected Memory Block: Not Supported 00:12:52.436 00:12:52.436 Firmware Slot Information 00:12:52.436 ========================= 00:12:52.436 Active slot: 1 00:12:52.436 Slot 1 Firmware Revision: 24.09 00:12:52.436 00:12:52.436 00:12:52.436 Commands Supported and Effects 00:12:52.436 ============================== 00:12:52.436 Admin Commands 00:12:52.436 -------------- 00:12:52.436 Get Log Page (02h): Supported 00:12:52.436 Identify (06h): Supported 00:12:52.436 Abort (08h): Supported 00:12:52.436 Set Features (09h): Supported 00:12:52.436 Get Features (0Ah): Supported 00:12:52.436 Asynchronous Event Request (0Ch): Supported 00:12:52.436 Keep Alive (18h): Supported 00:12:52.436 I/O Commands 00:12:52.436 ------------ 00:12:52.436 Flush (00h): Supported LBA-Change 00:12:52.436 Write (01h): Supported LBA-Change 00:12:52.436 Read (02h): Supported 00:12:52.436 Compare (05h): Supported 00:12:52.436 Write Zeroes (08h): Supported LBA-Change 00:12:52.436 Dataset Management (09h): Supported LBA-Change 00:12:52.436 Copy (19h): Supported LBA-Change 00:12:52.436 00:12:52.436 Error Log 00:12:52.436 ========= 00:12:52.436 00:12:52.436 Arbitration 00:12:52.436 =========== 00:12:52.436 Arbitration Burst: 1 00:12:52.436 00:12:52.436 Power Management 00:12:52.436 ================ 00:12:52.436 Number of Power States: 1 00:12:52.436 Current Power State: Power State #0 00:12:52.436 Power State #0: 00:12:52.436 Max Power: 0.00 W 00:12:52.436 Non-Operational State: Operational 00:12:52.436 Entry Latency: Not Reported 00:12:52.436 Exit Latency: Not Reported 00:12:52.436 Relative Read Throughput: 0 00:12:52.436 Relative Read Latency: 0 00:12:52.436 Relative Write Throughput: 0 00:12:52.436 Relative Write Latency: 0 00:12:52.436 Idle Power: Not Reported 00:12:52.436 Active Power: Not Reported 00:12:52.436 
Non-Operational Permissive Mode: Not Supported 00:12:52.436 00:12:52.436 Health Information 00:12:52.436 ================== 00:12:52.436 Critical Warnings: 00:12:52.436 Available Spare Space: OK 00:12:52.436 Temperature: OK 00:12:52.436 Device Reliability: OK 00:12:52.436 Read Only: No 00:12:52.436 Volatile Memory Backup: OK 00:12:52.436 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:52.436 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:52.436 Available Spare: 0% 00:12:52.436 Available Sp[2024-07-15 22:10:17.511266] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:52.436 [2024-07-15 22:10:17.519129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:52.436 [2024-07-15 22:10:17.519163] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:12:52.436 [2024-07-15 22:10:17.519172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:52.436 [2024-07-15 22:10:17.519179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:52.436 [2024-07-15 22:10:17.519185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:52.436 [2024-07-15 22:10:17.519191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:52.436 [2024-07-15 22:10:17.519243] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:52.436 [2024-07-15 22:10:17.519254] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:52.436 [2024-07-15 22:10:17.520248] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:52.436 [2024-07-15 22:10:17.520296] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:12:52.436 [2024-07-15 22:10:17.520302] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:12:52.436 [2024-07-15 22:10:17.521249] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:52.436 [2024-07-15 22:10:17.521261] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:12:52.436 [2024-07-15 22:10:17.521310] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:52.436 [2024-07-15 22:10:17.524128] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:52.436 are Threshold: 0% 00:12:52.436 Life Percentage Used: 0% 00:12:52.436 Data Units Read: 0 00:12:52.436 Data Units Written: 0 00:12:52.437 Host Read Commands: 0 00:12:52.437 Host Write Commands: 0 00:12:52.437 Controller Busy Time: 0 minutes 00:12:52.437 Power Cycles: 0 00:12:52.437 Power On Hours: 0 hours 00:12:52.437 Unsafe Shutdowns: 0 00:12:52.437 Unrecoverable Media 
Errors: 0 00:12:52.437 Lifetime Error Log Entries: 0 00:12:52.437 Warning Temperature Time: 0 minutes 00:12:52.437 Critical Temperature Time: 0 minutes 00:12:52.437 00:12:52.437 Number of Queues 00:12:52.437 ================ 00:12:52.437 Number of I/O Submission Queues: 127 00:12:52.437 Number of I/O Completion Queues: 127 00:12:52.437 00:12:52.437 Active Namespaces 00:12:52.437 ================= 00:12:52.437 Namespace ID:1 00:12:52.437 Error Recovery Timeout: Unlimited 00:12:52.437 Command Set Identifier: NVM (00h) 00:12:52.437 Deallocate: Supported 00:12:52.437 Deallocated/Unwritten Error: Not Supported 00:12:52.437 Deallocated Read Value: Unknown 00:12:52.437 Deallocate in Write Zeroes: Not Supported 00:12:52.437 Deallocated Guard Field: 0xFFFF 00:12:52.437 Flush: Supported 00:12:52.437 Reservation: Supported 00:12:52.437 Namespace Sharing Capabilities: Multiple Controllers 00:12:52.437 Size (in LBAs): 131072 (0GiB) 00:12:52.437 Capacity (in LBAs): 131072 (0GiB) 00:12:52.437 Utilization (in LBAs): 131072 (0GiB) 00:12:52.437 NGUID: AB40501464C748B0A0BCFD5C1E4562E0 00:12:52.437 UUID: ab405014-64c7-48b0-a0bc-fd5c1e4562e0 00:12:52.437 Thin Provisioning: Not Supported 00:12:52.437 Per-NS Atomic Units: Yes 00:12:52.437 Atomic Boundary Size (Normal): 0 00:12:52.437 Atomic Boundary Size (PFail): 0 00:12:52.437 Atomic Boundary Offset: 0 00:12:52.437 Maximum Single Source Range Length: 65535 00:12:52.437 Maximum Copy Length: 65535 00:12:52.437 Maximum Source Range Count: 1 00:12:52.437 NGUID/EUI64 Never Reused: No 00:12:52.437 Namespace Write Protected: No 00:12:52.437 Number of LBA Formats: 1 00:12:52.437 Current LBA Format: LBA Format #00 00:12:52.437 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:52.437 00:12:52.437 22:10:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:52.437 EAL: No free 2048 kB hugepages reported on node 1 00:12:52.437 [2024-07-15 22:10:17.708168] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:57.722 Initializing NVMe Controllers 00:12:57.722 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:57.722 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:57.722 Initialization complete. Launching workers. 
00:12:57.722 ======================================================== 00:12:57.722 Latency(us) 00:12:57.722 Device Information : IOPS MiB/s Average min max 00:12:57.722 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39962.44 156.10 3202.87 838.31 6812.76 00:12:57.722 ======================================================== 00:12:57.722 Total : 39962.44 156.10 3202.87 838.31 6812.76 00:12:57.722 00:12:57.722 [2024-07-15 22:10:22.813310] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:57.722 22:10:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:57.722 EAL: No free 2048 kB hugepages reported on node 1 00:12:57.722 [2024-07-15 22:10:22.996870] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:03.041 Initializing NVMe Controllers 00:13:03.041 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:03.041 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:03.041 Initialization complete. Launching workers. 00:13:03.041 ======================================================== 00:13:03.041 Latency(us) 00:13:03.041 Device Information : IOPS MiB/s Average min max 00:13:03.041 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35182.76 137.43 3637.94 1110.58 7260.81 00:13:03.042 ======================================================== 00:13:03.042 Total : 35182.76 137.43 3637.94 1110.58 7260.81 00:13:03.042 00:13:03.042 [2024-07-15 22:10:28.018043] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:03.042 22:10:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:03.042 EAL: No free 2048 kB hugepages reported on node 1 00:13:03.042 [2024-07-15 22:10:28.201496] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:08.329 [2024-07-15 22:10:33.339204] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:08.329 Initializing NVMe Controllers 00:13:08.329 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:08.329 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:08.329 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:08.329 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:08.329 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:08.329 Initialization complete. Launching workers. 
00:13:08.329 Starting thread on core 2 00:13:08.329 Starting thread on core 3 00:13:08.330 Starting thread on core 1 00:13:08.330 22:10:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:08.330 EAL: No free 2048 kB hugepages reported on node 1 00:13:08.330 [2024-07-15 22:10:33.601617] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:11.632 [2024-07-15 22:10:36.662407] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:11.632 Initializing NVMe Controllers 00:13:11.632 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:11.632 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:11.632 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:11.632 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:11.632 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:11.632 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:11.632 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:11.632 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:11.632 Initialization complete. Launching workers. 00:13:11.632 Starting thread on core 1 with urgent priority queue 00:13:11.632 Starting thread on core 2 with urgent priority queue 00:13:11.632 Starting thread on core 3 with urgent priority queue 00:13:11.632 Starting thread on core 0 with urgent priority queue 00:13:11.632 SPDK bdev Controller (SPDK2 ) core 0: 11379.33 IO/s 8.79 secs/100000 ios 00:13:11.632 SPDK bdev Controller (SPDK2 ) core 1: 8180.67 IO/s 12.22 secs/100000 ios 00:13:11.632 SPDK bdev Controller (SPDK2 ) core 2: 16451.67 IO/s 6.08 secs/100000 ios 00:13:11.632 SPDK bdev Controller (SPDK2 ) core 3: 10805.33 IO/s 9.25 secs/100000 ios 00:13:11.632 ======================================================== 00:13:11.632 00:13:11.632 22:10:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:11.632 EAL: No free 2048 kB hugepages reported on node 1 00:13:11.632 [2024-07-15 22:10:36.927612] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:11.632 Initializing NVMe Controllers 00:13:11.632 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:11.632 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:11.632 Namespace ID: 1 size: 0GB 00:13:11.632 Initialization complete. 00:13:11.632 INFO: using host memory buffer for IO 00:13:11.632 Hello world! 
00:13:11.632 [2024-07-15 22:10:36.937674] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:11.893 22:10:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:11.893 EAL: No free 2048 kB hugepages reported on node 1 00:13:11.893 [2024-07-15 22:10:37.196385] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:13.279 Initializing NVMe Controllers 00:13:13.279 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:13.279 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:13.279 Initialization complete. Launching workers. 00:13:13.279 submit (in ns) avg, min, max = 7372.3, 3901.7, 4001300.8 00:13:13.279 complete (in ns) avg, min, max = 20669.1, 2388.3, 3999511.7 00:13:13.279 00:13:13.279 Submit histogram 00:13:13.279 ================ 00:13:13.279 Range in us Cumulative Count 00:13:13.279 3.893 - 3.920: 0.9671% ( 187) 00:13:13.279 3.920 - 3.947: 6.9818% ( 1163) 00:13:13.279 3.947 - 3.973: 15.3444% ( 1617) 00:13:13.279 3.973 - 4.000: 26.4843% ( 2154) 00:13:13.279 4.000 - 4.027: 36.1450% ( 1868) 00:13:13.279 4.027 - 4.053: 47.8951% ( 2272) 00:13:13.280 4.053 - 4.080: 62.8000% ( 2882) 00:13:13.280 4.080 - 4.107: 78.1444% ( 2967) 00:13:13.280 4.107 - 4.133: 90.0548% ( 2303) 00:13:13.280 4.133 - 4.160: 96.2247% ( 1193) 00:13:13.280 4.160 - 4.187: 98.4899% ( 438) 00:13:13.280 4.187 - 4.213: 99.2036% ( 138) 00:13:13.280 4.213 - 4.240: 99.3380% ( 26) 00:13:13.280 4.240 - 4.267: 99.4053% ( 13) 00:13:13.280 4.267 - 4.293: 99.4466% ( 8) 00:13:13.280 4.293 - 4.320: 99.4570% ( 2) 00:13:13.280 4.320 - 4.347: 99.4673% ( 2) 00:13:13.280 4.347 - 4.373: 99.4828% ( 3) 00:13:13.280 4.400 - 4.427: 99.4880% ( 1) 00:13:13.280 4.480 - 4.507: 99.4932% ( 1) 00:13:13.280 4.507 - 4.533: 99.4983% ( 1) 00:13:13.280 4.987 - 5.013: 99.5035% ( 1) 00:13:13.280 5.013 - 5.040: 99.5087% ( 1) 00:13:13.280 5.253 - 5.280: 99.5139% ( 1) 00:13:13.280 5.360 - 5.387: 99.5190% ( 1) 00:13:13.280 5.787 - 5.813: 99.5242% ( 1) 00:13:13.280 5.920 - 5.947: 99.5345% ( 2) 00:13:13.280 6.053 - 6.080: 99.5449% ( 2) 00:13:13.280 6.107 - 6.133: 99.5501% ( 1) 00:13:13.280 6.133 - 6.160: 99.5604% ( 2) 00:13:13.280 6.160 - 6.187: 99.5707% ( 2) 00:13:13.280 6.187 - 6.213: 99.5759% ( 1) 00:13:13.280 6.213 - 6.240: 99.5914% ( 3) 00:13:13.280 6.240 - 6.267: 99.6121% ( 4) 00:13:13.280 6.267 - 6.293: 99.6225% ( 2) 00:13:13.280 6.293 - 6.320: 99.6276% ( 1) 00:13:13.280 6.320 - 6.347: 99.6432% ( 3) 00:13:13.280 6.427 - 6.453: 99.6535% ( 2) 00:13:13.280 6.453 - 6.480: 99.6587% ( 1) 00:13:13.280 7.200 - 7.253: 99.6690% ( 2) 00:13:13.280 7.253 - 7.307: 99.6794% ( 2) 00:13:13.280 7.360 - 7.413: 99.6845% ( 1) 00:13:13.280 7.413 - 7.467: 99.6897% ( 1) 00:13:13.280 7.520 - 7.573: 99.7000% ( 2) 00:13:13.280 7.627 - 7.680: 99.7156% ( 3) 00:13:13.280 7.680 - 7.733: 99.7259% ( 2) 00:13:13.280 7.733 - 7.787: 99.7414% ( 3) 00:13:13.280 7.840 - 7.893: 99.7518% ( 2) 00:13:13.280 7.893 - 7.947: 99.7673% ( 3) 00:13:13.280 7.947 - 8.000: 99.7724% ( 1) 00:13:13.280 8.000 - 8.053: 99.7828% ( 2) 00:13:13.280 8.053 - 8.107: 99.7880% ( 1) 00:13:13.280 8.107 - 8.160: 99.7931% ( 1) 00:13:13.280 8.160 - 8.213: 99.8086% ( 3) 00:13:13.280 8.213 - 8.267: 99.8138% ( 1) 00:13:13.280 8.320 - 8.373: 99.8242% ( 2) 00:13:13.280 8.373 - 8.427: 99.8345% ( 2) 
00:13:13.280 8.533 - 8.587: 99.8500% ( 3) 00:13:13.280 8.587 - 8.640: 99.8552% ( 1) 00:13:13.280 8.640 - 8.693: 99.8655% ( 2) 00:13:13.280 8.800 - 8.853: 99.8707% ( 1) 00:13:13.280 8.853 - 8.907: 99.8759% ( 1) 00:13:13.280 9.013 - 9.067: 99.8811% ( 1) 00:13:13.280 9.067 - 9.120: 99.8862% ( 1) 00:13:13.280 9.387 - 9.440: 99.8914% ( 1) 00:13:13.280 10.613 - 10.667: 99.8966% ( 1) 00:13:13.280 12.000 - 12.053: 99.9017% ( 1) 00:13:13.280 12.373 - 12.427: 99.9069% ( 1) 00:13:13.280 16.747 - 16.853: 99.9121% ( 1) 00:13:13.280 35.200 - 35.413: 99.9173% ( 1) 00:13:13.280 3986.773 - 4014.080: 100.0000% ( 16) 00:13:13.280 00:13:13.280 Complete histogram 00:13:13.280 ================== 00:13:13.280 Range in us Cumulative Count 00:13:13.280 2.387 - 2.400: 0.6671% ( 129) 00:13:13.280 2.400 - 2.413: 0.9826% ( 61) 00:13:13.280 2.413 - 2.427: 1.1326% ( 29) 00:13:13.280 2.427 - 2.440: 1.1895% ( 11) 00:13:13.280 2.440 - 2.453: 39.3877% ( 7386) 00:13:13.280 2.453 - 2.467: 56.7853% ( 3364) 00:13:13.280 2.467 - [2024-07-15 22:10:38.291811] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:13.280 2.480: 69.0060% ( 2363) 00:13:13.280 2.480 - 2.493: 77.9272% ( 1725) 00:13:13.280 2.493 - 2.507: 81.1388% ( 621) 00:13:13.280 2.507 - 2.520: 83.5747% ( 471) 00:13:13.280 2.520 - 2.533: 88.7257% ( 996) 00:13:13.280 2.533 - 2.547: 94.0267% ( 1025) 00:13:13.280 2.547 - 2.560: 96.3126% ( 442) 00:13:13.280 2.560 - 2.573: 98.0761% ( 341) 00:13:13.280 2.573 - 2.587: 99.0122% ( 181) 00:13:13.280 2.587 - 2.600: 99.2242% ( 41) 00:13:13.280 2.600 - 2.613: 99.2553% ( 6) 00:13:13.280 2.613 - 2.627: 99.2604% ( 1) 00:13:13.280 4.453 - 4.480: 99.2656% ( 1) 00:13:13.280 4.533 - 4.560: 99.2708% ( 1) 00:13:13.280 4.587 - 4.613: 99.2811% ( 2) 00:13:13.280 4.613 - 4.640: 99.2863% ( 1) 00:13:13.280 4.640 - 4.667: 99.2915% ( 1) 00:13:13.280 4.720 - 4.747: 99.2966% ( 1) 00:13:13.280 4.800 - 4.827: 99.3070% ( 2) 00:13:13.280 4.880 - 4.907: 99.3173% ( 2) 00:13:13.280 4.960 - 4.987: 99.3225% ( 1) 00:13:13.280 5.040 - 5.067: 99.3277% ( 1) 00:13:13.280 5.227 - 5.253: 99.3329% ( 1) 00:13:13.280 5.307 - 5.333: 99.3380% ( 1) 00:13:13.280 5.333 - 5.360: 99.3432% ( 1) 00:13:13.280 5.627 - 5.653: 99.3587% ( 3) 00:13:13.280 5.707 - 5.733: 99.3639% ( 1) 00:13:13.280 5.733 - 5.760: 99.3742% ( 2) 00:13:13.280 5.760 - 5.787: 99.3794% ( 1) 00:13:13.280 5.813 - 5.840: 99.3846% ( 1) 00:13:13.280 5.947 - 5.973: 99.3949% ( 2) 00:13:13.280 5.973 - 6.000: 99.4001% ( 1) 00:13:13.280 6.000 - 6.027: 99.4053% ( 1) 00:13:13.280 6.027 - 6.053: 99.4156% ( 2) 00:13:13.280 6.080 - 6.107: 99.4208% ( 1) 00:13:13.280 6.107 - 6.133: 99.4259% ( 1) 00:13:13.280 6.133 - 6.160: 99.4415% ( 3) 00:13:13.280 6.160 - 6.187: 99.4466% ( 1) 00:13:13.280 6.187 - 6.213: 99.4518% ( 1) 00:13:13.280 6.213 - 6.240: 99.4570% ( 1) 00:13:13.280 6.240 - 6.267: 99.4621% ( 1) 00:13:13.280 6.267 - 6.293: 99.4673% ( 1) 00:13:13.280 6.320 - 6.347: 99.4725% ( 1) 00:13:13.280 6.373 - 6.400: 99.4777% ( 1) 00:13:13.280 6.427 - 6.453: 99.4828% ( 1) 00:13:13.280 6.613 - 6.640: 99.4880% ( 1) 00:13:13.280 6.667 - 6.693: 99.4932% ( 1) 00:13:13.280 6.693 - 6.720: 99.4983% ( 1) 00:13:13.280 6.720 - 6.747: 99.5035% ( 1) 00:13:13.280 6.747 - 6.773: 99.5087% ( 1) 00:13:13.280 6.933 - 6.987: 99.5139% ( 1) 00:13:13.280 6.987 - 7.040: 99.5242% ( 2) 00:13:13.280 7.200 - 7.253: 99.5294% ( 1) 00:13:13.280 7.520 - 7.573: 99.5345% ( 1) 00:13:13.280 12.747 - 12.800: 99.5397% ( 1) 00:13:13.280 31.573 - 31.787: 99.5449% ( 1) 00:13:13.280 3986.773 - 4014.080: 100.0000% ( 
88) 00:13:13.280 00:13:13.280 22:10:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:13.280 22:10:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:13.280 22:10:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:13.280 22:10:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:13.280 22:10:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:13.280 [ 00:13:13.280 { 00:13:13.280 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:13.280 "subtype": "Discovery", 00:13:13.280 "listen_addresses": [], 00:13:13.280 "allow_any_host": true, 00:13:13.280 "hosts": [] 00:13:13.280 }, 00:13:13.281 { 00:13:13.281 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:13.281 "subtype": "NVMe", 00:13:13.281 "listen_addresses": [ 00:13:13.281 { 00:13:13.281 "trtype": "VFIOUSER", 00:13:13.281 "adrfam": "IPv4", 00:13:13.281 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:13.281 "trsvcid": "0" 00:13:13.281 } 00:13:13.281 ], 00:13:13.281 "allow_any_host": true, 00:13:13.281 "hosts": [], 00:13:13.281 "serial_number": "SPDK1", 00:13:13.281 "model_number": "SPDK bdev Controller", 00:13:13.281 "max_namespaces": 32, 00:13:13.281 "min_cntlid": 1, 00:13:13.281 "max_cntlid": 65519, 00:13:13.281 "namespaces": [ 00:13:13.281 { 00:13:13.281 "nsid": 1, 00:13:13.281 "bdev_name": "Malloc1", 00:13:13.281 "name": "Malloc1", 00:13:13.281 "nguid": "197292EC4E8F425D84674808B2EC9EB9", 00:13:13.281 "uuid": "197292ec-4e8f-425d-8467-4808b2ec9eb9" 00:13:13.281 }, 00:13:13.281 { 00:13:13.281 "nsid": 2, 00:13:13.281 "bdev_name": "Malloc3", 00:13:13.281 "name": "Malloc3", 00:13:13.281 "nguid": "689A4764DC534B55AA17C7B59467E79C", 00:13:13.281 "uuid": "689a4764-dc53-4b55-aa17-c7b59467e79c" 00:13:13.281 } 00:13:13.281 ] 00:13:13.281 }, 00:13:13.281 { 00:13:13.281 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:13.281 "subtype": "NVMe", 00:13:13.281 "listen_addresses": [ 00:13:13.281 { 00:13:13.281 "trtype": "VFIOUSER", 00:13:13.281 "adrfam": "IPv4", 00:13:13.281 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:13.281 "trsvcid": "0" 00:13:13.281 } 00:13:13.281 ], 00:13:13.281 "allow_any_host": true, 00:13:13.281 "hosts": [], 00:13:13.281 "serial_number": "SPDK2", 00:13:13.281 "model_number": "SPDK bdev Controller", 00:13:13.281 "max_namespaces": 32, 00:13:13.281 "min_cntlid": 1, 00:13:13.281 "max_cntlid": 65519, 00:13:13.281 "namespaces": [ 00:13:13.281 { 00:13:13.281 "nsid": 1, 00:13:13.281 "bdev_name": "Malloc2", 00:13:13.281 "name": "Malloc2", 00:13:13.281 "nguid": "AB40501464C748B0A0BCFD5C1E4562E0", 00:13:13.281 "uuid": "ab405014-64c7-48b0-a0bc-fd5c1e4562e0" 00:13:13.281 } 00:13:13.281 ] 00:13:13.281 } 00:13:13.281 ] 00:13:13.281 22:10:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:13.281 22:10:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2687453 00:13:13.281 22:10:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:13.281 22:10:38 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:13.281 22:10:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:13:13.281 22:10:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:13.281 22:10:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:13.281 22:10:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:13:13.281 22:10:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:13.281 22:10:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:13.281 EAL: No free 2048 kB hugepages reported on node 1 00:13:13.542 [2024-07-15 22:10:38.666961] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:13.542 Malloc4 00:13:13.542 22:10:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:13.542 [2024-07-15 22:10:38.829969] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:13.542 22:10:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:13.803 Asynchronous Event Request test 00:13:13.803 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:13.803 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:13.803 Registering asynchronous event callbacks... 00:13:13.803 Starting namespace attribute notice tests for all controllers... 00:13:13.803 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:13.803 aer_cb - Changed Namespace 00:13:13.803 Cleaning up... 
00:13:13.803 [ 00:13:13.803 { 00:13:13.803 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:13.803 "subtype": "Discovery", 00:13:13.803 "listen_addresses": [], 00:13:13.803 "allow_any_host": true, 00:13:13.803 "hosts": [] 00:13:13.803 }, 00:13:13.803 { 00:13:13.803 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:13.803 "subtype": "NVMe", 00:13:13.803 "listen_addresses": [ 00:13:13.803 { 00:13:13.803 "trtype": "VFIOUSER", 00:13:13.803 "adrfam": "IPv4", 00:13:13.803 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:13.803 "trsvcid": "0" 00:13:13.803 } 00:13:13.803 ], 00:13:13.803 "allow_any_host": true, 00:13:13.803 "hosts": [], 00:13:13.803 "serial_number": "SPDK1", 00:13:13.803 "model_number": "SPDK bdev Controller", 00:13:13.803 "max_namespaces": 32, 00:13:13.803 "min_cntlid": 1, 00:13:13.803 "max_cntlid": 65519, 00:13:13.803 "namespaces": [ 00:13:13.803 { 00:13:13.803 "nsid": 1, 00:13:13.803 "bdev_name": "Malloc1", 00:13:13.803 "name": "Malloc1", 00:13:13.803 "nguid": "197292EC4E8F425D84674808B2EC9EB9", 00:13:13.803 "uuid": "197292ec-4e8f-425d-8467-4808b2ec9eb9" 00:13:13.803 }, 00:13:13.803 { 00:13:13.803 "nsid": 2, 00:13:13.803 "bdev_name": "Malloc3", 00:13:13.803 "name": "Malloc3", 00:13:13.803 "nguid": "689A4764DC534B55AA17C7B59467E79C", 00:13:13.803 "uuid": "689a4764-dc53-4b55-aa17-c7b59467e79c" 00:13:13.803 } 00:13:13.803 ] 00:13:13.803 }, 00:13:13.803 { 00:13:13.803 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:13.803 "subtype": "NVMe", 00:13:13.803 "listen_addresses": [ 00:13:13.803 { 00:13:13.803 "trtype": "VFIOUSER", 00:13:13.803 "adrfam": "IPv4", 00:13:13.803 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:13.803 "trsvcid": "0" 00:13:13.803 } 00:13:13.803 ], 00:13:13.803 "allow_any_host": true, 00:13:13.803 "hosts": [], 00:13:13.803 "serial_number": "SPDK2", 00:13:13.803 "model_number": "SPDK bdev Controller", 00:13:13.803 "max_namespaces": 32, 00:13:13.803 "min_cntlid": 1, 00:13:13.803 "max_cntlid": 65519, 00:13:13.803 "namespaces": [ 00:13:13.803 { 00:13:13.803 "nsid": 1, 00:13:13.803 "bdev_name": "Malloc2", 00:13:13.803 "name": "Malloc2", 00:13:13.803 "nguid": "AB40501464C748B0A0BCFD5C1E4562E0", 00:13:13.803 "uuid": "ab405014-64c7-48b0-a0bc-fd5c1e4562e0" 00:13:13.803 }, 00:13:13.803 { 00:13:13.803 "nsid": 2, 00:13:13.803 "bdev_name": "Malloc4", 00:13:13.803 "name": "Malloc4", 00:13:13.803 "nguid": "2E96E003F98B4744968B5B0D13185809", 00:13:13.803 "uuid": "2e96e003-f98b-4744-968b-5b0d13185809" 00:13:13.803 } 00:13:13.803 ] 00:13:13.803 } 00:13:13.803 ] 00:13:13.803 22:10:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2687453 00:13:13.803 22:10:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:13.803 22:10:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2678359 00:13:13.803 22:10:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 2678359 ']' 00:13:13.803 22:10:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 2678359 00:13:13.803 22:10:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:13.803 22:10:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:13.803 22:10:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2678359 00:13:13.803 22:10:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:13.803 22:10:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo 
']' 00:13:13.803 22:10:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2678359' 00:13:13.803 killing process with pid 2678359 00:13:13.803 22:10:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 2678359 00:13:13.803 22:10:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 2678359 00:13:14.065 22:10:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:14.065 22:10:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:14.065 22:10:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:14.065 22:10:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:14.065 22:10:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:14.065 22:10:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2687731 00:13:14.065 22:10:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2687731' 00:13:14.065 Process pid: 2687731 00:13:14.065 22:10:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:14.065 22:10:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:14.065 22:10:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2687731 00:13:14.065 22:10:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 2687731 ']' 00:13:14.065 22:10:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.065 22:10:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:14.065 22:10:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.065 22:10:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:14.065 22:10:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:14.065 [2024-07-15 22:10:39.315389] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:14.065 [2024-07-15 22:10:39.316323] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:13:14.065 [2024-07-15 22:10:39.316370] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:14.065 EAL: No free 2048 kB hugepages reported on node 1 00:13:14.065 [2024-07-15 22:10:39.377704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:14.328 [2024-07-15 22:10:39.443311] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:14.328 [2024-07-15 22:10:39.443349] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:14.328 [2024-07-15 22:10:39.443356] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:14.328 [2024-07-15 22:10:39.443363] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:14.328 [2024-07-15 22:10:39.443368] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:14.328 [2024-07-15 22:10:39.443507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:14.328 [2024-07-15 22:10:39.443630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:14.328 [2024-07-15 22:10:39.443790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.328 [2024-07-15 22:10:39.443791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:14.328 [2024-07-15 22:10:39.508517] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:14.328 [2024-07-15 22:10:39.508590] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:14.328 [2024-07-15 22:10:39.509682] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:13:14.328 [2024-07-15 22:10:39.510056] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:14.328 [2024-07-15 22:10:39.510162] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:13:14.906 22:10:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:14.906 22:10:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:13:14.906 22:10:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:15.849 22:10:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:16.110 22:10:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:16.110 22:10:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:16.110 22:10:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:16.110 22:10:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:16.110 22:10:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:16.110 Malloc1 00:13:16.370 22:10:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:16.370 22:10:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:16.631 22:10:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:16.631 22:10:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:13:16.631 22:10:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:16.631 22:10:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:16.891 Malloc2 00:13:16.891 22:10:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:17.153 22:10:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:17.153 22:10:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:17.413 22:10:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:17.413 22:10:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2687731 00:13:17.413 22:10:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 2687731 ']' 00:13:17.413 22:10:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 2687731 00:13:17.413 22:10:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:17.413 22:10:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:17.413 22:10:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2687731 00:13:17.413 22:10:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:17.413 22:10:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:17.413 22:10:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2687731' 00:13:17.413 killing process with pid 2687731 00:13:17.413 22:10:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 2687731 00:13:17.413 22:10:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 2687731 00:13:17.675 22:10:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:17.675 22:10:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:17.675 00:13:17.675 real 0m50.476s 00:13:17.675 user 3m19.916s 00:13:17.675 sys 0m3.109s 00:13:17.675 22:10:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:17.675 22:10:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:17.675 ************************************ 00:13:17.675 END TEST nvmf_vfio_user 00:13:17.675 ************************************ 00:13:17.675 22:10:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:17.675 22:10:42 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:17.675 22:10:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:17.675 22:10:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:17.675 22:10:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:17.675 ************************************ 00:13:17.675 START 
TEST nvmf_vfio_user_nvme_compliance 00:13:17.675 ************************************ 00:13:17.675 22:10:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:17.936 * Looking for test storage... 00:13:17.936 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:17.936 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:17.936 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:13:17.936 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:17.936 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:17.936 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:17.936 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:17.936 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:17.936 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:17.936 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:17.936 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:17.936 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:17.936 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:17.936 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:17.936 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:17.936 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:17.937 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:17.937 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:17.937 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:17.937 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:17.937 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:17.937 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:17.937 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:17.937 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.937 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.937 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.937 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:13:17.937 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.937 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:13:17.937 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:17.937 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:17.937 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:17.937 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:17.937 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:17.937 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:17.937 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:17.937 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:17.937 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:13:17.937 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:17.937 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:17.937 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:17.937 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:17.937 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2688532 00:13:17.937 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2688532' 00:13:17.937 Process pid: 2688532 00:13:17.937 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:17.937 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:17.937 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2688532 00:13:17.937 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 2688532 ']' 00:13:17.937 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.937 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:17.937 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.937 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:17.937 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:17.937 [2024-07-15 22:10:43.098659] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:13:17.937 [2024-07-15 22:10:43.098718] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:17.937 EAL: No free 2048 kB hugepages reported on node 1 00:13:17.937 [2024-07-15 22:10:43.164059] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:17.937 [2024-07-15 22:10:43.234892] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:17.937 [2024-07-15 22:10:43.234931] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:17.937 [2024-07-15 22:10:43.234939] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:17.937 [2024-07-15 22:10:43.234945] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:17.937 [2024-07-15 22:10:43.234951] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:17.937 [2024-07-15 22:10:43.235102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:17.937 [2024-07-15 22:10:43.235212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:17.937 [2024-07-15 22:10:43.235384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.876 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:18.876 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:13:18.876 22:10:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:13:19.818 22:10:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:19.818 22:10:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:19.818 22:10:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:19.818 22:10:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.818 22:10:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:19.818 22:10:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.818 22:10:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:19.818 22:10:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:19.818 22:10:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.818 22:10:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:19.818 malloc0 00:13:19.818 22:10:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.818 22:10:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:19.818 22:10:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.818 22:10:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:19.818 22:10:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.818 22:10:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:19.818 22:10:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.818 22:10:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:19.818 22:10:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.818 22:10:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:19.818 22:10:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.818 22:10:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:19.818 22:10:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.818 
22:10:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:19.818 EAL: No free 2048 kB hugepages reported on node 1 00:13:19.818 00:13:19.818 00:13:19.818 CUnit - A unit testing framework for C - Version 2.1-3 00:13:19.818 http://cunit.sourceforge.net/ 00:13:19.818 00:13:19.818 00:13:19.818 Suite: nvme_compliance 00:13:19.818 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 22:10:45.136608] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:19.818 [2024-07-15 22:10:45.137952] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:19.818 [2024-07-15 22:10:45.137962] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:19.818 [2024-07-15 22:10:45.137967] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:19.818 [2024-07-15 22:10:45.139629] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:20.078 passed 00:13:20.078 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 22:10:45.232194] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:20.078 [2024-07-15 22:10:45.235214] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:20.078 passed 00:13:20.078 Test: admin_identify_ns ...[2024-07-15 22:10:45.330370] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:20.078 [2024-07-15 22:10:45.394137] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:20.078 [2024-07-15 22:10:45.402137] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:20.338 [2024-07-15 22:10:45.423236] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:20.338 passed 00:13:20.338 Test: admin_get_features_mandatory_features ...[2024-07-15 22:10:45.518956] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:20.338 [2024-07-15 22:10:45.521972] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:20.338 passed 00:13:20.338 Test: admin_get_features_optional_features ...[2024-07-15 22:10:45.616514] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:20.338 [2024-07-15 22:10:45.619531] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:20.338 passed 00:13:20.597 Test: admin_set_features_number_of_queues ...[2024-07-15 22:10:45.712671] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:20.597 [2024-07-15 22:10:45.818239] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:20.597 passed 00:13:20.597 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 22:10:45.913983] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:20.597 [2024-07-15 22:10:45.917001] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:20.857 passed 00:13:20.857 Test: admin_get_log_page_with_lpo ...[2024-07-15 22:10:46.010369] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:20.857 [2024-07-15 22:10:46.077137] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:20.857 [2024-07-15 22:10:46.090198] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:20.857 passed 00:13:21.117 Test: fabric_property_get ...[2024-07-15 22:10:46.183276] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:21.117 [2024-07-15 22:10:46.184523] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:21.117 [2024-07-15 22:10:46.186292] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:21.117 passed 00:13:21.117 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 22:10:46.278773] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:21.117 [2024-07-15 22:10:46.280036] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:21.117 [2024-07-15 22:10:46.282801] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:21.117 passed 00:13:21.117 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 22:10:46.374947] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:21.376 [2024-07-15 22:10:46.458129] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:21.376 [2024-07-15 22:10:46.474132] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:21.376 [2024-07-15 22:10:46.479201] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:21.376 passed 00:13:21.376 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 22:10:46.572798] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:21.376 [2024-07-15 22:10:46.574042] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:21.376 [2024-07-15 22:10:46.575815] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:21.376 passed 00:13:21.376 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 22:10:46.668949] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:21.636 [2024-07-15 22:10:46.744129] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:21.636 [2024-07-15 22:10:46.768129] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:21.636 [2024-07-15 22:10:46.773218] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:21.636 passed 00:13:21.636 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 22:10:46.866835] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:21.636 [2024-07-15 22:10:46.868073] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:21.636 [2024-07-15 22:10:46.868093] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:21.636 [2024-07-15 22:10:46.869854] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:21.636 passed 00:13:21.896 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 22:10:46.962956] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:21.896 [2024-07-15 22:10:47.054134] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:13:21.896 [2024-07-15 22:10:47.062133] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:21.896 [2024-07-15 22:10:47.070137] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:21.896 [2024-07-15 22:10:47.078130] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:21.896 [2024-07-15 22:10:47.107212] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:21.896 passed 00:13:21.896 Test: admin_create_io_sq_verify_pc ...[2024-07-15 22:10:47.200817] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:21.896 [2024-07-15 22:10:47.215138] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:22.156 [2024-07-15 22:10:47.232971] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:22.156 passed 00:13:22.156 Test: admin_create_io_qp_max_qps ...[2024-07-15 22:10:47.328522] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:23.536 [2024-07-15 22:10:48.441134] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:23.536 [2024-07-15 22:10:48.832424] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:23.795 passed 00:13:23.795 Test: admin_create_io_sq_shared_cq ...[2024-07-15 22:10:48.926362] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:23.795 [2024-07-15 22:10:49.058130] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:23.795 [2024-07-15 22:10:49.095194] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:24.056 passed 00:13:24.056 00:13:24.056 Run Summary: Type Total Ran Passed Failed Inactive 00:13:24.056 suites 1 1 n/a 0 0 00:13:24.056 tests 18 18 18 0 0 00:13:24.056 asserts 360 360 360 0 n/a 00:13:24.056 00:13:24.056 Elapsed time = 1.662 seconds 00:13:24.056 22:10:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2688532 00:13:24.056 22:10:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 2688532 ']' 00:13:24.056 22:10:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 2688532 00:13:24.056 22:10:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:13:24.056 22:10:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:24.056 22:10:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2688532 00:13:24.056 22:10:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:24.056 22:10:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:24.056 22:10:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2688532' 00:13:24.056 killing process with pid 2688532 00:13:24.056 22:10:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 2688532 00:13:24.056 22:10:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 2688532 00:13:24.056 22:10:49 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:24.056 22:10:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:24.056 00:13:24.056 real 0m6.432s 00:13:24.056 user 0m18.461s 00:13:24.056 sys 0m0.438s 00:13:24.056 22:10:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:24.056 22:10:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:24.056 ************************************ 00:13:24.056 END TEST nvmf_vfio_user_nvme_compliance 00:13:24.056 ************************************ 00:13:24.321 22:10:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:24.321 22:10:49 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:24.321 22:10:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:24.321 22:10:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:24.321 22:10:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:24.321 ************************************ 00:13:24.321 START TEST nvmf_vfio_user_fuzz 00:13:24.321 ************************************ 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:24.321 * Looking for test storage... 00:13:24.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
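The nvmf/common.sh lines above build a host identity for the fuzz run: nvme gen-hostnqn returns a UUID-based NQN, and the matching host ID is the UUID portion of that NQN (00d0226a-fbea-ec11-9bc7-a4bf019282be in this trace). A minimal sketch of that derivation (the exact string handling inside common.sh may differ, but the resulting values match the ones logged):

    # generate a UUID-based host NQN, e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-...
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    # host ID = the UUID after the last ':' of the NQN (assumed derivation)
    NVME_HOSTID=${NVME_HOSTNQN##*:}
    # later handed to nvme-cli as --hostnqn/--hostid, exactly as the array in the trace shows
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")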
00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:24.321 22:10:49 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2689873 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2689873' 00:13:24.321 Process pid: 2689873 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2689873 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 2689873 ']' 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
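With the fuzz target up, the rpc_cmd calls traced next provision the same kind of vfio-user subsystem the compliance run used earlier: a VFIOUSER transport, a 64 MiB / 512-byte-block malloc bdev, subsystem nqn.2021-09.io.spdk:cnode0 with that namespace attached, and a listener rooted at /var/run/vfio-user. Driven by hand through scripts/rpc.py, the same sequence would look roughly like this (a sketch; the test issues the equivalent calls via rpc_cmd over /var/tmp/spdk.sock):

    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

nvme_fuzz then points at that listener via the -F trid string and, as the summary further down shows, exercises both the admin and the I/O queue pair of cnode0 for the duration given by -t 30.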
00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:24.321 22:10:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:25.292 22:10:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:25.292 22:10:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:13:25.292 22:10:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:26.234 22:10:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:26.234 22:10:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.234 22:10:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:26.234 22:10:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.234 22:10:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:26.234 22:10:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:26.234 22:10:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.234 22:10:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:26.234 malloc0 00:13:26.234 22:10:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.234 22:10:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:26.234 22:10:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.234 22:10:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:26.234 22:10:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.234 22:10:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:26.234 22:10:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.234 22:10:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:26.234 22:10:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.234 22:10:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:26.234 22:10:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.234 22:10:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:26.234 22:10:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.234 22:10:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:26.234 22:10:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:58.345 Fuzzing completed. 
Shutting down the fuzz application 00:13:58.345 00:13:58.345 Dumping successful admin opcodes: 00:13:58.345 8, 9, 10, 24, 00:13:58.345 Dumping successful io opcodes: 00:13:58.345 0, 00:13:58.345 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1230277, total successful commands: 4831, random_seed: 4069749120 00:13:58.345 NS: 0x200003a1ef00 admin qp, Total commands completed: 154546, total successful commands: 1249, random_seed: 2201706688 00:13:58.345 22:11:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:58.345 22:11:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.345 22:11:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:58.345 22:11:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.345 22:11:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2689873 00:13:58.345 22:11:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 2689873 ']' 00:13:58.345 22:11:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 2689873 00:13:58.345 22:11:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:13:58.345 22:11:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:58.345 22:11:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2689873 00:13:58.345 22:11:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:58.345 22:11:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:58.345 22:11:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2689873' 00:13:58.345 killing process with pid 2689873 00:13:58.345 22:11:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 2689873 00:13:58.345 22:11:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 2689873 00:13:58.345 22:11:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:58.345 22:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:58.345 00:13:58.345 real 0m33.646s 00:13:58.345 user 0m40.615s 00:13:58.345 sys 0m23.032s 00:13:58.345 22:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:58.345 22:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:58.345 ************************************ 00:13:58.345 END TEST nvmf_vfio_user_fuzz 00:13:58.345 ************************************ 00:13:58.345 22:11:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:58.345 22:11:23 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:58.345 22:11:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:58.345 22:11:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:58.345 22:11:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:58.345 ************************************ 
00:13:58.345 START TEST nvmf_host_management 00:13:58.345 ************************************ 00:13:58.345 22:11:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:58.345 * Looking for test storage... 00:13:58.345 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:58.345 22:11:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:58.345 22:11:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:13:58.345 22:11:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:58.345 22:11:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:58.345 22:11:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:58.345 22:11:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:58.345 22:11:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:58.345 22:11:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:58.345 22:11:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:58.345 22:11:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:58.345 22:11:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:58.345 22:11:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:58.345 22:11:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:58.345 22:11:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:58.345 22:11:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:58.345 22:11:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:58.345 22:11:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:58.345 22:11:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:58.345 22:11:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:58.345 22:11:23 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:58.345 22:11:23 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:58.345 22:11:23 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:58.345 22:11:23 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.346 
22:11:23 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.346 22:11:23 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.346 22:11:23 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:13:58.346 22:11:23 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.346 22:11:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:13:58.346 22:11:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:58.346 22:11:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:58.346 22:11:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:58.346 22:11:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:58.346 22:11:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:58.346 22:11:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:58.346 22:11:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:58.346 22:11:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:58.346 22:11:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:58.346 22:11:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:58.346 22:11:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:13:58.346 22:11:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:58.346 22:11:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:58.346 22:11:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:58.346 22:11:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:58.346 22:11:23 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:58.346 22:11:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.346 22:11:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:58.346 22:11:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:58.346 22:11:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:58.346 22:11:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:58.346 22:11:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:13:58.346 22:11:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:04.933 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:04.933 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:14:04.933 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:04.933 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:04.933 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:04.933 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:04.933 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:04.933 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:14:04.933 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:04.933 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:14:04.933 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:14:04.933 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:14:04.933 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:14:04.933 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:14:04.933 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:14:04.933 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:04.933 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:04.933 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:04.933 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:04.933 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:04.933 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:04.933 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:04.933 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:04.933 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:04.933 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:04.933 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:04.934 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:04.934 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:04.934 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:04.934 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:04.934 22:11:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:04.934 22:11:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:04.934 22:11:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:04.934 22:11:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:04.934 22:11:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:04.934 22:11:30 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:04.934 22:11:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:04.934 22:11:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:04.934 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:04.934 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.673 ms 00:14:04.934 00:14:04.934 --- 10.0.0.2 ping statistics --- 00:14:04.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.934 rtt min/avg/max/mdev = 0.673/0.673/0.673/0.000 ms 00:14:04.934 22:11:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:04.934 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:04.934 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.389 ms 00:14:04.934 00:14:04.934 --- 10.0.0.1 ping statistics --- 00:14:04.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.934 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:14:04.934 22:11:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:04.934 22:11:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:14:04.934 22:11:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:04.934 22:11:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:04.934 22:11:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:04.934 22:11:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:04.934 22:11:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:04.934 22:11:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:04.934 22:11:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:05.195 22:11:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:14:05.195 22:11:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:14:05.195 22:11:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:05.195 22:11:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:05.195 22:11:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:05.195 22:11:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:05.195 22:11:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2699914 00:14:05.195 22:11:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2699914 00:14:05.195 22:11:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:05.195 22:11:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2699914 ']' 00:14:05.195 22:11:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.195 22:11:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:05.195 22:11:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:05.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.195 22:11:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:05.195 22:11:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:05.195 [2024-07-15 22:11:30.338413] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:14:05.195 [2024-07-15 22:11:30.338479] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:05.195 EAL: No free 2048 kB hugepages reported on node 1 00:14:05.195 [2024-07-15 22:11:30.426589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:05.456 [2024-07-15 22:11:30.520036] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:05.456 [2024-07-15 22:11:30.520090] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:05.456 [2024-07-15 22:11:30.520098] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:05.456 [2024-07-15 22:11:30.520104] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:05.456 [2024-07-15 22:11:30.520110] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:05.456 [2024-07-15 22:11:30.520249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:05.456 [2024-07-15 22:11:30.520385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:05.456 [2024-07-15 22:11:30.520441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:05.456 [2024-07-15 22:11:30.520441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:06.026 22:11:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:06.026 22:11:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:14:06.026 22:11:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:06.026 22:11:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:06.026 22:11:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:06.026 22:11:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:06.026 22:11:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:06.026 22:11:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.026 22:11:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:06.026 [2024-07-15 22:11:31.152781] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:06.026 22:11:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.026 22:11:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:06.026 22:11:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:06.026 22:11:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:06.026 22:11:31 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:06.026 22:11:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:14:06.026 22:11:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:14:06.026 22:11:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.026 22:11:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:06.026 Malloc0 00:14:06.026 [2024-07-15 22:11:31.216136] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:06.026 22:11:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.026 22:11:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:06.026 22:11:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:06.026 22:11:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:06.026 22:11:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2700285 00:14:06.026 22:11:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2700285 /var/tmp/bdevperf.sock 00:14:06.026 22:11:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2700285 ']' 00:14:06.026 22:11:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:06.026 22:11:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:06.026 22:11:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:06.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
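Worth noting for this TCP test: the data path was prepared a little further up by nvmftestinit, which splits the two ports of the Intel NIC (cvl_0_0 / cvl_0_1) across a network namespace so initiator and target traffic crosses real hardware on a single host, and the target itself then runs inside that namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...). Condensed from the commands in the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                              # root ns -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # target namespace -> root ns

The Malloc0 bdev and the listener on 10.0.0.2 port 4420 reported above come from the rpcs.txt batch the script feeds to the target; its exact contents are not echoed in the trace.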
00:14:06.026 22:11:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:06.026 22:11:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:06.026 22:11:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:06.026 22:11:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:06.026 22:11:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:06.026 22:11:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:06.026 22:11:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:06.026 22:11:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:06.026 { 00:14:06.026 "params": { 00:14:06.026 "name": "Nvme$subsystem", 00:14:06.026 "trtype": "$TEST_TRANSPORT", 00:14:06.026 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:06.026 "adrfam": "ipv4", 00:14:06.026 "trsvcid": "$NVMF_PORT", 00:14:06.026 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:06.026 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:06.026 "hdgst": ${hdgst:-false}, 00:14:06.026 "ddgst": ${ddgst:-false} 00:14:06.026 }, 00:14:06.026 "method": "bdev_nvme_attach_controller" 00:14:06.026 } 00:14:06.026 EOF 00:14:06.026 )") 00:14:06.026 22:11:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:06.026 22:11:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:14:06.026 22:11:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:06.026 22:11:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:06.026 "params": { 00:14:06.026 "name": "Nvme0", 00:14:06.026 "trtype": "tcp", 00:14:06.026 "traddr": "10.0.0.2", 00:14:06.026 "adrfam": "ipv4", 00:14:06.026 "trsvcid": "4420", 00:14:06.026 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:06.026 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:06.026 "hdgst": false, 00:14:06.026 "ddgst": false 00:14:06.026 }, 00:14:06.026 "method": "bdev_nvme_attach_controller" 00:14:06.026 }' 00:14:06.026 [2024-07-15 22:11:31.325863] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:14:06.026 [2024-07-15 22:11:31.325931] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2700285 ] 00:14:06.288 EAL: No free 2048 kB hugepages reported on node 1 00:14:06.288 [2024-07-15 22:11:31.386923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.288 [2024-07-15 22:11:31.451235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.549 Running I/O for 10 seconds... 
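The JSON fragment printed just above is what gen_nvmf_target_json hands bdevperf on fd 63: a single bdev_nvme_attach_controller call that creates Nvme0 against the TCP listener at 10.0.0.2:4420. Run standalone, the same benchmark would look roughly like the sketch below; the surrounding "subsystems"/"bdev" wrapper is an assumption based on the config format --json expects (the test builds the full document inline), and the file name is illustrative only:

    cat > /tmp/nvme0_attach.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false, "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # queue depth 64, 64 KiB I/O, verify workload, 10 seconds, private RPC socket for live stats
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock --json /tmp/nvme0_attach.json -q 64 -o 65536 -w verify -t 10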
00:14:06.810 22:11:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:06.810 22:11:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:14:06.810 22:11:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:06.810 22:11:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.810 22:11:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:06.810 22:11:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.810 22:11:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:06.811 22:11:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:06.811 22:11:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:06.811 22:11:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:06.811 22:11:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:14:06.811 22:11:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:14:06.811 22:11:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:06.811 22:11:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:06.811 22:11:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:06.811 22:11:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:06.811 22:11:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.811 22:11:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:06.811 22:11:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.074 22:11:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=387 00:14:07.074 22:11:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 387 -ge 100 ']' 00:14:07.074 22:11:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:14:07.074 22:11:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:14:07.074 22:11:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:14:07.074 22:11:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:07.074 22:11:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.074 22:11:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:07.074 [2024-07-15 22:11:32.159144] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c29e40 is same with the state(5) to be set 00:14:07.074 [2024-07-15 22:11:32.159184] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c29e40 is same with the state(5) to be set 00:14:07.074 [2024-07-15 22:11:32.159192] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c29e40 is same with the state(5) to be 
set 00:14:07.074 [2024-07-15 22:11:32.159199] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c29e40 is same with the state(5) to be set 00:14:07.074 [2024-07-15 22:11:32.159211] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c29e40 is same with the state(5) to be set 00:14:07.074 [2024-07-15 22:11:32.159849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:59136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.074 [2024-07-15 22:11:32.159886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.074 [2024-07-15 22:11:32.159902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:59264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.074 [2024-07-15 22:11:32.159910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.074 [2024-07-15 22:11:32.159920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:59392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.074 [2024-07-15 22:11:32.159928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.074 [2024-07-15 22:11:32.159938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:59520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.074 [2024-07-15 22:11:32.159945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.074 [2024-07-15 22:11:32.159955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:59648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.074 [2024-07-15 22:11:32.159962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.074 [2024-07-15 22:11:32.159972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:59776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.074 [2024-07-15 22:11:32.159980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.074 [2024-07-15 22:11:32.159990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.074 [2024-07-15 22:11:32.159997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.074 [2024-07-15 22:11:32.160007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.074 [2024-07-15 22:11:32.160015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.074 [2024-07-15 22:11:32.160025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.074 [2024-07-15 22:11:32.160032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.074 [2024-07-15 
22:11:32.160042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:60288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.074 [2024-07-15 22:11:32.160049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.074 [2024-07-15 22:11:32.160058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.074 [2024-07-15 22:11:32.160066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.074 [2024-07-15 22:11:32.160076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.074 [2024-07-15 22:11:32.160083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.074 [2024-07-15 22:11:32.160097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:60672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.074 [2024-07-15 22:11:32.160105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.074 [2024-07-15 22:11:32.160115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.074 [2024-07-15 22:11:32.160128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.074 [2024-07-15 22:11:32.160137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.074 [2024-07-15 22:11:32.160145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.074 [2024-07-15 22:11:32.160154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.074 [2024-07-15 22:11:32.160160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.074 [2024-07-15 22:11:32.160170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.074 [2024-07-15 22:11:32.160177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.074 [2024-07-15 22:11:32.160186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:61312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.074 [2024-07-15 22:11:32.160194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.074 [2024-07-15 22:11:32.160203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:61440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.074 [2024-07-15 22:11:32.160211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.074 [2024-07-15 
22:11:32.160220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.074 [2024-07-15 22:11:32.160228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.074 [2024-07-15 22:11:32.160237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.074 [2024-07-15 22:11:32.160244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.074 [2024-07-15 22:11:32.160254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.074 [2024-07-15 22:11:32.160262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.074 [2024-07-15 22:11:32.160271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.074 [2024-07-15 22:11:32.160279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.074 [2024-07-15 22:11:32.160288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:62080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.074 [2024-07-15 22:11:32.160296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.074 [2024-07-15 22:11:32.160305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:62208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.074 [2024-07-15 22:11:32.160315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.074 [2024-07-15 22:11:32.160325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.074 [2024-07-15 22:11:32.160332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.074 [2024-07-15 22:11:32.160342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.075 [2024-07-15 22:11:32.160350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.075 [2024-07-15 22:11:32.160360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:62592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.075 [2024-07-15 22:11:32.160367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.075 [2024-07-15 22:11:32.160376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.075 [2024-07-15 22:11:32.160384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.075 [2024-07-15 
22:11:32.160393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:62848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.075 [2024-07-15 22:11:32.160401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.075 [2024-07-15 22:11:32.160410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:62976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.075 [2024-07-15 22:11:32.160417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.075 [2024-07-15 22:11:32.160427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.075 [2024-07-15 22:11:32.160434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.075 [2024-07-15 22:11:32.160444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.075 [2024-07-15 22:11:32.160451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.075 [2024-07-15 22:11:32.160461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.075 [2024-07-15 22:11:32.160468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.075 [2024-07-15 22:11:32.160478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.075 [2024-07-15 22:11:32.160485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.075 [2024-07-15 22:11:32.160495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.075 [2024-07-15 22:11:32.160502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.075 [2024-07-15 22:11:32.160512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.075 [2024-07-15 22:11:32.160520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.075 [2024-07-15 22:11:32.160531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.075 [2024-07-15 22:11:32.160539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.075 [2024-07-15 22:11:32.160549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.075 [2024-07-15 22:11:32.160556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.075 [2024-07-15 
22:11:32.160566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.075 [2024-07-15 22:11:32.160573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.075 [2024-07-15 22:11:32.160583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.075 [2024-07-15 22:11:32.160590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.075 [2024-07-15 22:11:32.160600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.075 [2024-07-15 22:11:32.160607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.075 [2024-07-15 22:11:32.160616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.075 [2024-07-15 22:11:32.160624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.075 [2024-07-15 22:11:32.160634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.075 [2024-07-15 22:11:32.160641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.075 [2024-07-15 22:11:32.160650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.075 [2024-07-15 22:11:32.160658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.075 [2024-07-15 22:11:32.160667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.075 [2024-07-15 22:11:32.160675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.075 [2024-07-15 22:11:32.160685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.075 [2024-07-15 22:11:32.160692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.075 [2024-07-15 22:11:32.160702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.075 [2024-07-15 22:11:32.160709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.075 [2024-07-15 22:11:32.160718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.075 [2024-07-15 22:11:32.160726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.075 [2024-07-15 
22:11:32.160735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.075 [2024-07-15 22:11:32.160744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.075 [2024-07-15 22:11:32.160754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.075 [2024-07-15 22:11:32.160761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.075 [2024-07-15 22:11:32.160771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.075 [2024-07-15 22:11:32.160778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.075 [2024-07-15 22:11:32.160787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.075 [2024-07-15 22:11:32.160795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.075 [2024-07-15 22:11:32.160804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.075 [2024-07-15 22:11:32.160812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.075 [2024-07-15 22:11:32.160821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:57856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.075 [2024-07-15 22:11:32.160829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.075 [2024-07-15 22:11:32.160839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:57984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.075 [2024-07-15 22:11:32.160846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.075 [2024-07-15 22:11:32.160856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:58112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.075 [2024-07-15 22:11:32.160863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.075 [2024-07-15 22:11:32.160872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:58240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.075 [2024-07-15 22:11:32.160880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.075 [2024-07-15 22:11:32.160889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:58368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.075 [2024-07-15 22:11:32.160897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.075 [2024-07-15 22:11:32.160906] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:58496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.075 [2024-07-15 22:11:32.160913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.075 [2024-07-15 22:11:32.160923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:58624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.075 [2024-07-15 22:11:32.160931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.075 [2024-07-15 22:11:32.160941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:58752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.075 [2024-07-15 22:11:32.160948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.075 [2024-07-15 22:11:32.160959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:58880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.075 [2024-07-15 22:11:32.160967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.075 [2024-07-15 22:11:32.160977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:59008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.075 [2024-07-15 22:11:32.160984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.075 [2024-07-15 22:11:32.160993] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef64f0 is same with the state(5) to be set 00:14:07.075 [2024-07-15 22:11:32.161032] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xef64f0 was disconnected and freed. reset controller. 
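For reference, the waitforio check driven earlier in this run (target/host_management.sh@54-58) keeps polling bdevperf's iostat until the bdev has served enough verify reads. A minimal sketch, with the 10-retry budget and 100-op threshold taken from the log and the 1-second pause between polls assumed:

# Sketch only: poll bdev_get_iostat until Nvme0n1 reports >= 100 reads.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for i in $(seq 1 10); do
    ops=$("$RPC" -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
          | jq -r '.bdevs[0].num_read_ops')
    [ "$ops" -ge 100 ] && break        # enough I/O observed, stop waiting
    sleep 1                            # pause between polls (assumption)
done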
00:14:07.075 [2024-07-15 22:11:32.162211] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:14:07.075 task offset: 59136 on job bdev=Nvme0n1 fails
00:14:07.075
00:14:07.075 Latency(us)
00:14:07.075 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:07.075 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:14:07.075 Job: Nvme0n1 ended in about 0.44 seconds with error
00:14:07.075 Verification LBA range: start 0x0 length 0x400
00:14:07.075 Nvme0n1 : 0.44 1035.31 64.71 146.59 0.00 52655.62 1570.13 48278.19
00:14:07.075 ===================================================================================================================
00:14:07.075 Total : 1035.31 64.71 146.59 0.00 52655.62 1570.13 48278.19
00:14:07.076 22:11:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:07.076 [2024-07-15 22:11:32.164217] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:14:07.076 [2024-07-15 22:11:32.164241] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xae53b0 (9): Bad file descriptor
00:14:07.076 22:11:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:14:07.076 22:11:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:07.076 22:11:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:14:07.076 [2024-07-15 22:11:32.168883] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:14:07.076 [2024-07-15 22:11:32.168986] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:14:07.076 [2024-07-15 22:11:32.169007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:07.076 [2024-07-15 22:11:32.169023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
00:14:07.076 [2024-07-15 22:11:32.169030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
00:14:07.076 [2024-07-15 22:11:32.169038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:14:07.076 [2024-07-15 22:11:32.169044] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae53b0
00:14:07.076 [2024-07-15 22:11:32.169063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xae53b0 (9): Bad file descriptor
00:14:07.076 [2024-07-15 22:11:32.169075] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:14:07.076 [2024-07-15 22:11:32.169083] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:14:07.076 [2024-07-15 22:11:32.169099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:14:07.076 [2024-07-15 22:11:32.169111] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
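For reference, the failure above follows the test pulling the host off the subsystem's allow list while I/O is in flight (target/host_management.sh@84); re-adding it and pausing is the counterpart (@85 and the sleep that follows). The round trip reduces to two RPCs taken from the log; running them standalone against the same target is an assumption:

# Sketch only: revoke, then restore, host access on the subsystem.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$RPC" nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# in-flight I/O is aborted and the reconnect is rejected with
# "Subsystem ... does not allow host ..."
"$RPC" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
sleep 1    # give the initiator time to reconnect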
00:14:07.076 22:11:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.076 22:11:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:14:08.018 22:11:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2700285 00:14:08.019 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2700285) - No such process 00:14:08.019 22:11:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:14:08.019 22:11:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:08.019 22:11:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:08.019 22:11:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:08.019 22:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:08.019 22:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:08.019 22:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:08.019 22:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:08.019 { 00:14:08.019 "params": { 00:14:08.019 "name": "Nvme$subsystem", 00:14:08.019 "trtype": "$TEST_TRANSPORT", 00:14:08.019 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:08.019 "adrfam": "ipv4", 00:14:08.019 "trsvcid": "$NVMF_PORT", 00:14:08.019 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:08.019 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:08.019 "hdgst": ${hdgst:-false}, 00:14:08.019 "ddgst": ${ddgst:-false} 00:14:08.019 }, 00:14:08.019 "method": "bdev_nvme_attach_controller" 00:14:08.019 } 00:14:08.019 EOF 00:14:08.019 )") 00:14:08.019 22:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:08.019 22:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:14:08.019 22:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:08.019 22:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:08.019 "params": { 00:14:08.019 "name": "Nvme0", 00:14:08.019 "trtype": "tcp", 00:14:08.019 "traddr": "10.0.0.2", 00:14:08.019 "adrfam": "ipv4", 00:14:08.019 "trsvcid": "4420", 00:14:08.019 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:08.019 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:08.019 "hdgst": false, 00:14:08.019 "ddgst": false 00:14:08.019 }, 00:14:08.019 "method": "bdev_nvme_attach_controller" 00:14:08.019 }' 00:14:08.019 [2024-07-15 22:11:33.235147] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:14:08.019 [2024-07-15 22:11:33.235205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2700637 ] 00:14:08.019 EAL: No free 2048 kB hugepages reported on node 1 00:14:08.019 [2024-07-15 22:11:33.293992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.278 [2024-07-15 22:11:33.358301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.538 Running I/O for 1 seconds... 
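The 'kill: (2700285) - No such process' message earlier in this stretch is benign: the script force-kills the perf pid even when it has already exited and tolerates the failure, the same idiom used in the trap installed before the run:

# Sketch only: force-kill the perf process even if it is already gone;
# '|| true' keeps the non-zero exit from aborting the test script.
kill -9 "$perfpid" || true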
00:14:09.479 00:14:09.479 Latency(us) 00:14:09.479 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:09.479 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:09.479 Verification LBA range: start 0x0 length 0x400 00:14:09.479 Nvme0n1 : 1.03 1057.86 66.12 0.00 0.00 59629.98 14636.37 50244.27 00:14:09.479 =================================================================================================================== 00:14:09.479 Total : 1057.86 66.12 0.00 0.00 59629.98 14636.37 50244.27 00:14:09.479 22:11:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:14:09.479 22:11:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:09.479 22:11:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:14:09.479 22:11:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:09.479 22:11:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:14:09.479 22:11:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:09.740 22:11:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:14:09.740 22:11:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:09.740 22:11:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:14:09.740 22:11:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:09.740 22:11:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:09.740 rmmod nvme_tcp 00:14:09.740 rmmod nvme_fabrics 00:14:09.740 rmmod nvme_keyring 00:14:09.740 22:11:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:09.740 22:11:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:14:09.740 22:11:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:14:09.740 22:11:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2699914 ']' 00:14:09.740 22:11:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2699914 00:14:09.740 22:11:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 2699914 ']' 00:14:09.740 22:11:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 2699914 00:14:09.740 22:11:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:14:09.740 22:11:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:09.740 22:11:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2699914 00:14:09.740 22:11:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:09.740 22:11:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:09.740 22:11:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2699914' 00:14:09.740 killing process with pid 2699914 00:14:09.740 22:11:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 2699914 00:14:09.740 22:11:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 2699914 00:14:09.740 [2024-07-15 22:11:35.054257] 
app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:10.001 22:11:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:10.001 22:11:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:10.001 22:11:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:10.001 22:11:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:10.001 22:11:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:10.001 22:11:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.001 22:11:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:10.001 22:11:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.916 22:11:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:11.916 22:11:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:14:11.916 00:14:11.916 real 0m14.002s 00:14:11.916 user 0m22.745s 00:14:11.916 sys 0m6.107s 00:14:11.916 22:11:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:11.916 22:11:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:11.916 ************************************ 00:14:11.916 END TEST nvmf_host_management 00:14:11.916 ************************************ 00:14:11.916 22:11:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:11.916 22:11:37 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:11.916 22:11:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:11.916 22:11:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:11.916 22:11:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:11.916 ************************************ 00:14:11.916 START TEST nvmf_lvol 00:14:11.916 ************************************ 00:14:11.916 22:11:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:12.178 * Looking for test storage... 
00:14:12.178 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.178 22:11:37 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:14:12.178 22:11:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:18.819 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:18.819 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:18.819 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:18.819 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:18.819 
22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:18.819 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:18.820 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:18.820 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:18.820 22:11:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:18.820 22:11:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:18.820 22:11:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:18.820 22:11:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:18.820 22:11:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:19.080 22:11:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:19.080 22:11:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:19.080 22:11:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:19.080 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:19.080 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.474 ms 00:14:19.080 00:14:19.080 --- 10.0.0.2 ping statistics --- 00:14:19.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.080 rtt min/avg/max/mdev = 0.474/0.474/0.474/0.000 ms 00:14:19.080 22:11:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:19.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:19.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:14:19.080 00:14:19.080 --- 10.0.0.1 ping statistics --- 00:14:19.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.080 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:14:19.080 22:11:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:19.080 22:11:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:14:19.080 22:11:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:19.080 22:11:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:19.080 22:11:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:19.080 22:11:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:19.080 22:11:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:19.080 22:11:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:19.080 22:11:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:19.080 22:11:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:19.080 22:11:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:19.080 22:11:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:19.080 22:11:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:19.080 22:11:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2704976 00:14:19.080 22:11:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2704976 00:14:19.080 22:11:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:19.080 22:11:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 2704976 ']' 00:14:19.080 22:11:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.080 22:11:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:19.080 22:11:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.080 22:11:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:19.080 22:11:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:19.080 [2024-07-15 22:11:44.289217] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:14:19.080 [2024-07-15 22:11:44.289267] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:19.080 EAL: No free 2048 kB hugepages reported on node 1 00:14:19.080 [2024-07-15 22:11:44.357474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:19.341 [2024-07-15 22:11:44.422595] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:19.341 [2024-07-15 22:11:44.422627] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
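For reference, the namespace plumbing and target launch replayed in this stretch (nvmf/common.sh@248-264 and @480) reduce to the commands below, all taken from the log; only the explicit backgrounding with '&' is an assumption:

# Sketch only: move one port into a namespace, address both ends, open the
# NVMe/TCP port, then start nvmf_tgt inside the namespace on cores 0-2.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0x7 &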
00:14:19.341 [2024-07-15 22:11:44.422635] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:19.341 [2024-07-15 22:11:44.422642] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:19.341 [2024-07-15 22:11:44.422648] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:19.341 [2024-07-15 22:11:44.426140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:19.341 [2024-07-15 22:11:44.426228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:19.341 [2024-07-15 22:11:44.426396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.341 22:11:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:19.341 22:11:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:14:19.341 22:11:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:19.341 22:11:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:19.341 22:11:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:19.341 22:11:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:19.341 22:11:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:19.602 [2024-07-15 22:11:44.696923] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:19.602 22:11:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:19.602 22:11:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:19.602 22:11:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:19.863 22:11:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:19.863 22:11:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:20.124 22:11:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:20.385 22:11:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=9cbb1b27-36fe-451e-8ae3-aeae2832ff94 00:14:20.385 22:11:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9cbb1b27-36fe-451e-8ae3-aeae2832ff94 lvol 20 00:14:20.385 22:11:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=5ccc47b6-a2bc-43d2-b15d-16e1fdb5e37d 00:14:20.385 22:11:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:20.645 22:11:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5ccc47b6-a2bc-43d2-b15d-16e1fdb5e37d 00:14:20.645 22:11:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
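At this point nvmf_lvol.sh has finished provisioning the stack it is about to exercise: a raid0 bdev striped over two malloc bdevs, an lvstore named lvs on the raid, and one lvol attached as namespace 1 of nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420. Reduced to its RPC calls (shell variable names are mine; the UUIDs in the comments are the ones returned in this particular run):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    malloc0=$($rpc bdev_malloc_create 64 512)          # -> Malloc0
    malloc1=$($rpc bdev_malloc_create 64 512)          # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "$malloc0 $malloc1"
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)     # 9cbb1b27-36fe-451e-8ae3-aeae2832ff94
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)    # 5ccc47b6-a2bc-43d2-b15d-16e1fdb5e37d
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The remainder of the test, traced below, adds a discovery listener, starts a 10-second spdk_nvme_perf randwrite run against the subsystem, and while that IO is in flight takes MY_SNAPSHOT, resizes the lvol to 30, clones the snapshot as MY_CLONE and inflates the clone, then waits for perf to finish before deleting the subsystem, the lvol and the lvstore.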
00:14:20.906 [2024-07-15 22:11:46.109226] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:20.906 22:11:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:21.167 22:11:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2705424 00:14:21.167 22:11:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:21.167 22:11:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:21.167 EAL: No free 2048 kB hugepages reported on node 1 00:14:22.110 22:11:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 5ccc47b6-a2bc-43d2-b15d-16e1fdb5e37d MY_SNAPSHOT 00:14:22.372 22:11:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=7846d335-b273-4e06-9333-03b3497052c2 00:14:22.372 22:11:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 5ccc47b6-a2bc-43d2-b15d-16e1fdb5e37d 30 00:14:22.633 22:11:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 7846d335-b273-4e06-9333-03b3497052c2 MY_CLONE 00:14:22.633 22:11:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=28394add-329a-479b-b4b3-46816cb8afc5 00:14:22.633 22:11:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 28394add-329a-479b-b4b3-46816cb8afc5 00:14:23.204 22:11:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2705424 00:14:31.339 Initializing NVMe Controllers 00:14:31.339 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:31.339 Controller IO queue size 128, less than required. 00:14:31.339 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:31.339 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:31.339 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:31.339 Initialization complete. Launching workers. 
00:14:31.339 ======================================================== 00:14:31.339 Latency(us) 00:14:31.339 Device Information : IOPS MiB/s Average min max 00:14:31.339 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12634.50 49.35 10133.57 1306.51 60842.85 00:14:31.339 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15000.80 58.60 8534.93 3245.82 61032.89 00:14:31.339 ======================================================== 00:14:31.339 Total : 27635.30 107.95 9265.81 1306.51 61032.89 00:14:31.339 00:14:31.339 22:11:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:31.600 22:11:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5ccc47b6-a2bc-43d2-b15d-16e1fdb5e37d 00:14:31.861 22:11:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9cbb1b27-36fe-451e-8ae3-aeae2832ff94 00:14:31.861 22:11:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:31.861 22:11:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:31.861 22:11:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:31.861 22:11:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:31.861 22:11:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:14:31.861 22:11:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:31.861 22:11:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:14:31.861 22:11:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:31.861 22:11:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:31.861 rmmod nvme_tcp 00:14:31.861 rmmod nvme_fabrics 00:14:31.861 rmmod nvme_keyring 00:14:31.861 22:11:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:32.122 22:11:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:14:32.122 22:11:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:14:32.122 22:11:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2704976 ']' 00:14:32.122 22:11:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2704976 00:14:32.122 22:11:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 2704976 ']' 00:14:32.122 22:11:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 2704976 00:14:32.122 22:11:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:14:32.122 22:11:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:32.122 22:11:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2704976 00:14:32.122 22:11:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:32.122 22:11:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:32.122 22:11:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2704976' 00:14:32.122 killing process with pid 2704976 00:14:32.122 22:11:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 2704976 00:14:32.122 22:11:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 2704976 00:14:32.123 22:11:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:32.123 
22:11:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:32.123 22:11:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:32.123 22:11:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:32.123 22:11:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:32.123 22:11:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:32.123 22:11:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:32.123 22:11:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.659 22:11:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:34.659 00:14:34.659 real 0m22.252s 00:14:34.659 user 0m58.911s 00:14:34.659 sys 0m8.873s 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:34.660 ************************************ 00:14:34.660 END TEST nvmf_lvol 00:14:34.660 ************************************ 00:14:34.660 22:11:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:34.660 22:11:59 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:34.660 22:11:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:34.660 22:11:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:34.660 22:11:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:34.660 ************************************ 00:14:34.660 START TEST nvmf_lvs_grow 00:14:34.660 ************************************ 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:34.660 * Looking for test storage... 
00:14:34.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:14:34.660 22:11:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:41.237 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:41.237 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:14:41.237 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:41.237 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:41.237 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:41.237 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:41.237 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:41.237 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:14:41.237 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:41.237 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:14:41.237 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:14:41.237 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:14:41.237 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:14:41.237 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:14:41.237 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:14:41.237 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:41.237 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:41.237 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:41.237 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:41.237 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:41.237 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:41.237 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:41.237 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:41.237 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:41.237 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:41.237 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:41.237 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:41.237 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:41.237 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:41.237 22:12:06 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:41.237 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:41.237 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:41.237 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:41.237 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:41.237 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:41.237 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:41.237 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:41.237 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:41.237 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:41.237 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:41.237 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:41.237 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:41.237 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:41.238 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:41.238 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:41.238 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:41.238 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:41.238 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:41.238 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:41.238 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:41.238 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:41.238 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:41.238 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:41.238 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:41.238 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:41.238 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:41.238 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:41.238 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:41.238 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:41.238 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:41.238 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:41.238 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:41.238 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:41.238 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:41.238 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:41.238 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:41.238 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:14:41.238 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:41.238 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:41.238 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:41.238 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:41.238 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:41.238 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:14:41.238 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:41.238 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:41.238 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:41.238 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:41.238 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:41.238 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:41.238 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:41.238 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:41.238 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:41.238 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:41.238 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:41.238 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:41.238 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:41.497 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:41.498 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:41.498 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:41.498 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:41.498 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:41.498 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:41.498 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:41.498 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:41.498 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:41.757 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:41.757 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:41.758 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.549 ms 00:14:41.758 00:14:41.758 --- 10.0.0.2 ping statistics --- 00:14:41.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.758 rtt min/avg/max/mdev = 0.549/0.549/0.549/0.000 ms 00:14:41.758 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:41.758 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:41.758 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.362 ms 00:14:41.758 00:14:41.758 --- 10.0.0.1 ping statistics --- 00:14:41.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.758 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:14:41.758 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:41.758 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:14:41.758 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:41.758 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:41.758 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:41.758 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:41.758 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:41.758 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:41.758 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:41.758 22:12:06 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:14:41.758 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:41.758 22:12:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:41.758 22:12:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:41.758 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2711810 00:14:41.758 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2711810 00:14:41.758 22:12:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:41.758 22:12:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 2711810 ']' 00:14:41.758 22:12:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.758 22:12:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:41.758 22:12:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:41.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.758 22:12:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:41.758 22:12:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:41.758 [2024-07-15 22:12:06.946019] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:14:41.758 [2024-07-15 22:12:06.946076] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:41.758 EAL: No free 2048 kB hugepages reported on node 1 00:14:41.758 [2024-07-15 22:12:07.014875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.017 [2024-07-15 22:12:07.082666] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:42.017 [2024-07-15 22:12:07.082702] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:42.017 [2024-07-15 22:12:07.082709] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:42.017 [2024-07-15 22:12:07.082716] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:42.017 [2024-07-15 22:12:07.082721] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:42.017 [2024-07-15 22:12:07.082742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.587 22:12:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:42.587 22:12:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:14:42.587 22:12:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:42.587 22:12:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:42.587 22:12:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:42.587 22:12:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:42.587 22:12:07 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:42.587 [2024-07-15 22:12:07.901747] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:42.848 22:12:07 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:14:42.848 22:12:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:42.848 22:12:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:42.848 22:12:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:42.848 ************************************ 00:14:42.848 START TEST lvs_grow_clean 00:14:42.848 ************************************ 00:14:42.848 22:12:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:14:42.848 22:12:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:42.848 22:12:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:42.848 22:12:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:42.848 22:12:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:42.848 22:12:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:42.848 22:12:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:42.848 22:12:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:42.848 22:12:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:42.848 22:12:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:42.848 22:12:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:14:42.848 22:12:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:43.108 22:12:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=228560a9-3076-413a-a958-40d44e7ffdb8 00:14:43.108 22:12:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 228560a9-3076-413a-a958-40d44e7ffdb8 00:14:43.108 22:12:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:43.369 22:12:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:43.369 22:12:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:43.369 22:12:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 228560a9-3076-413a-a958-40d44e7ffdb8 lvol 150 00:14:43.369 22:12:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=4a9f2e60-465c-493c-acf4-679c7dadb00d 00:14:43.369 22:12:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:43.369 22:12:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:43.628 [2024-07-15 22:12:08.810229] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:43.628 [2024-07-15 22:12:08.810281] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:43.628 true 00:14:43.628 22:12:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 228560a9-3076-413a-a958-40d44e7ffdb8 00:14:43.628 22:12:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:43.888 22:12:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:43.889 22:12:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:43.889 22:12:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4a9f2e60-465c-493c-acf4-679c7dadb00d 00:14:44.149 22:12:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:44.149 [2024-07-15 22:12:09.432136] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:44.149 22:12:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:44.446 22:12:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2712702 00:14:44.446 22:12:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:44.446 22:12:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2712702 /var/tmp/bdevperf.sock 00:14:44.446 22:12:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 2712702 ']' 00:14:44.446 22:12:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:44.446 22:12:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:44.446 22:12:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:44.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:44.446 22:12:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:44.446 22:12:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:44.446 22:12:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:44.446 [2024-07-15 22:12:09.661440] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
00:14:44.446 [2024-07-15 22:12:09.661491] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2712702 ] 00:14:44.446 EAL: No free 2048 kB hugepages reported on node 1 00:14:44.446 [2024-07-15 22:12:09.735642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.727 [2024-07-15 22:12:09.800080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:45.299 22:12:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:45.299 22:12:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:14:45.299 22:12:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:45.560 Nvme0n1 00:14:45.560 22:12:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:45.821 [ 00:14:45.821 { 00:14:45.821 "name": "Nvme0n1", 00:14:45.821 "aliases": [ 00:14:45.821 "4a9f2e60-465c-493c-acf4-679c7dadb00d" 00:14:45.821 ], 00:14:45.821 "product_name": "NVMe disk", 00:14:45.821 "block_size": 4096, 00:14:45.821 "num_blocks": 38912, 00:14:45.821 "uuid": "4a9f2e60-465c-493c-acf4-679c7dadb00d", 00:14:45.821 "assigned_rate_limits": { 00:14:45.821 "rw_ios_per_sec": 0, 00:14:45.821 "rw_mbytes_per_sec": 0, 00:14:45.821 "r_mbytes_per_sec": 0, 00:14:45.821 "w_mbytes_per_sec": 0 00:14:45.821 }, 00:14:45.821 "claimed": false, 00:14:45.821 "zoned": false, 00:14:45.821 "supported_io_types": { 00:14:45.821 "read": true, 00:14:45.821 "write": true, 00:14:45.821 "unmap": true, 00:14:45.821 "flush": true, 00:14:45.821 "reset": true, 00:14:45.821 "nvme_admin": true, 00:14:45.821 "nvme_io": true, 00:14:45.821 "nvme_io_md": false, 00:14:45.821 "write_zeroes": true, 00:14:45.821 "zcopy": false, 00:14:45.821 "get_zone_info": false, 00:14:45.821 "zone_management": false, 00:14:45.821 "zone_append": false, 00:14:45.821 "compare": true, 00:14:45.821 "compare_and_write": true, 00:14:45.821 "abort": true, 00:14:45.821 "seek_hole": false, 00:14:45.821 "seek_data": false, 00:14:45.821 "copy": true, 00:14:45.821 "nvme_iov_md": false 00:14:45.821 }, 00:14:45.821 "memory_domains": [ 00:14:45.821 { 00:14:45.821 "dma_device_id": "system", 00:14:45.821 "dma_device_type": 1 00:14:45.821 } 00:14:45.821 ], 00:14:45.821 "driver_specific": { 00:14:45.821 "nvme": [ 00:14:45.821 { 00:14:45.821 "trid": { 00:14:45.821 "trtype": "TCP", 00:14:45.821 "adrfam": "IPv4", 00:14:45.821 "traddr": "10.0.0.2", 00:14:45.821 "trsvcid": "4420", 00:14:45.821 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:45.821 }, 00:14:45.821 "ctrlr_data": { 00:14:45.821 "cntlid": 1, 00:14:45.821 "vendor_id": "0x8086", 00:14:45.821 "model_number": "SPDK bdev Controller", 00:14:45.821 "serial_number": "SPDK0", 00:14:45.821 "firmware_revision": "24.09", 00:14:45.821 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:45.821 "oacs": { 00:14:45.821 "security": 0, 00:14:45.821 "format": 0, 00:14:45.821 "firmware": 0, 00:14:45.821 "ns_manage": 0 00:14:45.821 }, 00:14:45.821 "multi_ctrlr": true, 00:14:45.821 "ana_reporting": false 00:14:45.821 }, 
00:14:45.821 "vs": { 00:14:45.821 "nvme_version": "1.3" 00:14:45.821 }, 00:14:45.821 "ns_data": { 00:14:45.821 "id": 1, 00:14:45.821 "can_share": true 00:14:45.821 } 00:14:45.821 } 00:14:45.821 ], 00:14:45.821 "mp_policy": "active_passive" 00:14:45.821 } 00:14:45.821 } 00:14:45.821 ] 00:14:45.821 22:12:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2713231 00:14:45.821 22:12:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:45.821 22:12:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:45.821 Running I/O for 10 seconds... 00:14:46.765 Latency(us) 00:14:46.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:46.765 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:46.765 Nvme0n1 : 1.00 18123.00 70.79 0.00 0.00 0.00 0.00 0.00 00:14:46.765 =================================================================================================================== 00:14:46.765 Total : 18123.00 70.79 0.00 0.00 0.00 0.00 0.00 00:14:46.765 00:14:47.707 22:12:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 228560a9-3076-413a-a958-40d44e7ffdb8 00:14:47.968 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:47.968 Nvme0n1 : 2.00 18190.00 71.05 0.00 0.00 0.00 0.00 0.00 00:14:47.968 =================================================================================================================== 00:14:47.968 Total : 18190.00 71.05 0.00 0.00 0.00 0.00 0.00 00:14:47.968 00:14:47.968 true 00:14:47.968 22:12:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 228560a9-3076-413a-a958-40d44e7ffdb8 00:14:47.968 22:12:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:48.228 22:12:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:48.228 22:12:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:48.228 22:12:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2713231 00:14:48.800 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:48.800 Nvme0n1 : 3.00 18249.33 71.29 0.00 0.00 0.00 0.00 0.00 00:14:48.800 =================================================================================================================== 00:14:48.800 Total : 18249.33 71.29 0.00 0.00 0.00 0.00 0.00 00:14:48.800 00:14:50.186 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:50.186 Nvme0n1 : 4.00 18274.50 71.38 0.00 0.00 0.00 0.00 0.00 00:14:50.186 =================================================================================================================== 00:14:50.186 Total : 18274.50 71.38 0.00 0.00 0.00 0.00 0.00 00:14:50.186 00:14:51.128 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:51.128 Nvme0n1 : 5.00 18305.80 71.51 0.00 0.00 0.00 0.00 0.00 00:14:51.128 =================================================================================================================== 00:14:51.128 
Total : 18305.80 71.51 0.00 0.00 0.00 0.00 0.00 00:14:51.128 00:14:52.070 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:52.070 Nvme0n1 : 6.00 18330.00 71.60 0.00 0.00 0.00 0.00 0.00 00:14:52.070 =================================================================================================================== 00:14:52.070 Total : 18330.00 71.60 0.00 0.00 0.00 0.00 0.00 00:14:52.070 00:14:53.012 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:53.012 Nvme0n1 : 7.00 18341.86 71.65 0.00 0.00 0.00 0.00 0.00 00:14:53.012 =================================================================================================================== 00:14:53.012 Total : 18341.86 71.65 0.00 0.00 0.00 0.00 0.00 00:14:53.012 00:14:53.955 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:53.955 Nvme0n1 : 8.00 18359.50 71.72 0.00 0.00 0.00 0.00 0.00 00:14:53.955 =================================================================================================================== 00:14:53.955 Total : 18359.50 71.72 0.00 0.00 0.00 0.00 0.00 00:14:53.955 00:14:54.898 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:54.898 Nvme0n1 : 9.00 18360.44 71.72 0.00 0.00 0.00 0.00 0.00 00:14:54.898 =================================================================================================================== 00:14:54.898 Total : 18360.44 71.72 0.00 0.00 0.00 0.00 0.00 00:14:54.898 00:14:55.841 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:55.841 Nvme0n1 : 10.00 18370.80 71.76 0.00 0.00 0.00 0.00 0.00 00:14:55.841 =================================================================================================================== 00:14:55.841 Total : 18370.80 71.76 0.00 0.00 0.00 0.00 0.00 00:14:55.841 00:14:55.841 00:14:55.841 Latency(us) 00:14:55.841 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.841 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:55.841 Nvme0n1 : 10.01 18369.77 71.76 0.00 0.00 6964.16 2826.24 12178.77 00:14:55.842 =================================================================================================================== 00:14:55.842 Total : 18369.77 71.76 0.00 0.00 6964.16 2826.24 12178.77 00:14:55.842 0 00:14:55.842 22:12:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2712702 00:14:55.842 22:12:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 2712702 ']' 00:14:55.842 22:12:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 2712702 00:14:55.842 22:12:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:14:55.842 22:12:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:55.842 22:12:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2712702 00:14:56.102 22:12:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:56.102 22:12:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:56.102 22:12:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2712702' 00:14:56.102 killing process with pid 2712702 00:14:56.102 22:12:21 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 2712702 00:14:56.102 Received shutdown signal, test time was about 10.000000 seconds 00:14:56.102 00:14:56.102 Latency(us) 00:14:56.102 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.102 =================================================================================================================== 00:14:56.102 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:56.102 22:12:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 2712702 00:14:56.102 22:12:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:56.363 22:12:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:56.363 22:12:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 228560a9-3076-413a-a958-40d44e7ffdb8 00:14:56.363 22:12:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:56.624 22:12:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:56.624 22:12:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:14:56.624 22:12:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:56.624 [2024-07-15 22:12:21.889559] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:56.624 22:12:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 228560a9-3076-413a-a958-40d44e7ffdb8 00:14:56.624 22:12:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:14:56.624 22:12:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 228560a9-3076-413a-a958-40d44e7ffdb8 00:14:56.624 22:12:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:56.624 22:12:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:56.624 22:12:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:56.624 22:12:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:56.624 22:12:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:56.624 22:12:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:56.624 22:12:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:56.624 22:12:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:56.624 22:12:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 228560a9-3076-413a-a958-40d44e7ffdb8 00:14:56.885 request: 00:14:56.885 { 00:14:56.885 "uuid": "228560a9-3076-413a-a958-40d44e7ffdb8", 00:14:56.885 "method": "bdev_lvol_get_lvstores", 00:14:56.885 "req_id": 1 00:14:56.885 } 00:14:56.885 Got JSON-RPC error response 00:14:56.885 response: 00:14:56.885 { 00:14:56.885 "code": -19, 00:14:56.885 "message": "No such device" 00:14:56.885 } 00:14:56.885 22:12:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:14:56.885 22:12:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:56.885 22:12:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:56.885 22:12:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:56.885 22:12:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:57.145 aio_bdev 00:14:57.145 22:12:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4a9f2e60-465c-493c-acf4-679c7dadb00d 00:14:57.145 22:12:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=4a9f2e60-465c-493c-acf4-679c7dadb00d 00:14:57.145 22:12:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:57.145 22:12:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:14:57.145 22:12:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:57.145 22:12:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:57.145 22:12:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:57.145 22:12:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4a9f2e60-465c-493c-acf4-679c7dadb00d -t 2000 00:14:57.407 [ 00:14:57.407 { 00:14:57.407 "name": "4a9f2e60-465c-493c-acf4-679c7dadb00d", 00:14:57.407 "aliases": [ 00:14:57.407 "lvs/lvol" 00:14:57.407 ], 00:14:57.407 "product_name": "Logical Volume", 00:14:57.407 "block_size": 4096, 00:14:57.407 "num_blocks": 38912, 00:14:57.407 "uuid": "4a9f2e60-465c-493c-acf4-679c7dadb00d", 00:14:57.407 "assigned_rate_limits": { 00:14:57.407 "rw_ios_per_sec": 0, 00:14:57.407 "rw_mbytes_per_sec": 0, 00:14:57.407 "r_mbytes_per_sec": 0, 00:14:57.407 "w_mbytes_per_sec": 0 00:14:57.407 }, 00:14:57.407 "claimed": false, 00:14:57.407 "zoned": false, 00:14:57.407 "supported_io_types": { 00:14:57.407 "read": true, 00:14:57.407 "write": true, 00:14:57.407 "unmap": true, 00:14:57.407 "flush": false, 00:14:57.407 "reset": true, 00:14:57.407 "nvme_admin": false, 00:14:57.407 "nvme_io": false, 00:14:57.407 
"nvme_io_md": false, 00:14:57.407 "write_zeroes": true, 00:14:57.407 "zcopy": false, 00:14:57.407 "get_zone_info": false, 00:14:57.407 "zone_management": false, 00:14:57.407 "zone_append": false, 00:14:57.407 "compare": false, 00:14:57.407 "compare_and_write": false, 00:14:57.407 "abort": false, 00:14:57.407 "seek_hole": true, 00:14:57.407 "seek_data": true, 00:14:57.407 "copy": false, 00:14:57.407 "nvme_iov_md": false 00:14:57.407 }, 00:14:57.407 "driver_specific": { 00:14:57.407 "lvol": { 00:14:57.407 "lvol_store_uuid": "228560a9-3076-413a-a958-40d44e7ffdb8", 00:14:57.407 "base_bdev": "aio_bdev", 00:14:57.407 "thin_provision": false, 00:14:57.407 "num_allocated_clusters": 38, 00:14:57.407 "snapshot": false, 00:14:57.407 "clone": false, 00:14:57.407 "esnap_clone": false 00:14:57.407 } 00:14:57.407 } 00:14:57.407 } 00:14:57.407 ] 00:14:57.407 22:12:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:14:57.407 22:12:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 228560a9-3076-413a-a958-40d44e7ffdb8 00:14:57.407 22:12:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:57.407 22:12:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:57.408 22:12:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 228560a9-3076-413a-a958-40d44e7ffdb8 00:14:57.408 22:12:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:57.668 22:12:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:57.668 22:12:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4a9f2e60-465c-493c-acf4-679c7dadb00d 00:14:57.668 22:12:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 228560a9-3076-413a-a958-40d44e7ffdb8 00:14:57.948 22:12:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:57.948 22:12:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:58.208 00:14:58.208 real 0m15.332s 00:14:58.208 user 0m15.069s 00:14:58.208 sys 0m1.253s 00:14:58.208 22:12:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:58.208 22:12:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:58.208 ************************************ 00:14:58.208 END TEST lvs_grow_clean 00:14:58.208 ************************************ 00:14:58.208 22:12:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:14:58.208 22:12:23 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:58.208 22:12:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:58.208 22:12:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:14:58.208 22:12:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:58.208 ************************************ 00:14:58.208 START TEST lvs_grow_dirty 00:14:58.208 ************************************ 00:14:58.208 22:12:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:14:58.208 22:12:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:58.208 22:12:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:58.208 22:12:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:58.208 22:12:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:58.208 22:12:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:58.208 22:12:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:58.208 22:12:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:58.208 22:12:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:58.208 22:12:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:58.470 22:12:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:58.470 22:12:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:58.470 22:12:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=aa7429ae-7f8c-4c45-85ea-58ba16da0bdc 00:14:58.470 22:12:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa7429ae-7f8c-4c45-85ea-58ba16da0bdc 00:14:58.470 22:12:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:58.771 22:12:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:58.771 22:12:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:58.771 22:12:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u aa7429ae-7f8c-4c45-85ea-58ba16da0bdc lvol 150 00:14:58.771 22:12:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=b029eceb-5469-447d-8c17-408a69053225 00:14:58.771 22:12:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:58.771 22:12:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:59.032 
[2024-07-15 22:12:24.166616] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:59.032 [2024-07-15 22:12:24.166669] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:59.032 true 00:14:59.032 22:12:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:59.032 22:12:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa7429ae-7f8c-4c45-85ea-58ba16da0bdc 00:14:59.032 22:12:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:59.032 22:12:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:59.292 22:12:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b029eceb-5469-447d-8c17-408a69053225 00:14:59.553 22:12:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:59.553 [2024-07-15 22:12:24.760443] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:59.553 22:12:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:59.813 22:12:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2716042 00:14:59.813 22:12:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:59.813 22:12:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:59.813 22:12:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2716042 /var/tmp/bdevperf.sock 00:14:59.813 22:12:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2716042 ']' 00:14:59.813 22:12:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:59.813 22:12:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:59.813 22:12:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:59.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
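A condensed, hand-runnable sketch of the grow setup the trace above drives, assuming an SPDK checkout with a running target; $RPC and $AIO_FILE are placeholders, not variables from the test:

  RPC=./scripts/rpc.py                           # assumption: run from the SPDK repo root
  AIO_FILE=/tmp/aio_bdev                         # placeholder for the test's backing file
  truncate -s 200M "$AIO_FILE"
  $RPC bdev_aio_create "$AIO_FILE" aio_bdev 4096
  LVS=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)        # 49 data clusters at 200M
  LVOL=$($RPC bdev_lvol_create -u "$LVS" lvol 150)            # 150 MiB volume
  truncate -s 400M "$AIO_FILE"                   # grow the file on disk...
  $RPC bdev_aio_rescan aio_bdev                  # ...and let the aio bdev pick up the new size
  # the cluster count stays at the old value until bdev_lvol_grow_lvstore (line 60 of the test) runs
  $RPC bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].total_data_clusters'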
00:14:59.813 22:12:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:59.813 22:12:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:59.813 [2024-07-15 22:12:24.972920] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:14:59.813 [2024-07-15 22:12:24.972973] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2716042 ] 00:14:59.813 EAL: No free 2048 kB hugepages reported on node 1 00:14:59.813 [2024-07-15 22:12:25.046603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.813 [2024-07-15 22:12:25.100706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:00.756 22:12:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:00.756 22:12:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:15:00.756 22:12:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:00.756 Nvme0n1 00:15:01.017 22:12:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:01.017 [ 00:15:01.017 { 00:15:01.017 "name": "Nvme0n1", 00:15:01.017 "aliases": [ 00:15:01.017 "b029eceb-5469-447d-8c17-408a69053225" 00:15:01.017 ], 00:15:01.017 "product_name": "NVMe disk", 00:15:01.017 "block_size": 4096, 00:15:01.017 "num_blocks": 38912, 00:15:01.017 "uuid": "b029eceb-5469-447d-8c17-408a69053225", 00:15:01.017 "assigned_rate_limits": { 00:15:01.017 "rw_ios_per_sec": 0, 00:15:01.017 "rw_mbytes_per_sec": 0, 00:15:01.017 "r_mbytes_per_sec": 0, 00:15:01.017 "w_mbytes_per_sec": 0 00:15:01.017 }, 00:15:01.017 "claimed": false, 00:15:01.017 "zoned": false, 00:15:01.017 "supported_io_types": { 00:15:01.017 "read": true, 00:15:01.017 "write": true, 00:15:01.017 "unmap": true, 00:15:01.017 "flush": true, 00:15:01.017 "reset": true, 00:15:01.017 "nvme_admin": true, 00:15:01.017 "nvme_io": true, 00:15:01.017 "nvme_io_md": false, 00:15:01.017 "write_zeroes": true, 00:15:01.017 "zcopy": false, 00:15:01.017 "get_zone_info": false, 00:15:01.017 "zone_management": false, 00:15:01.017 "zone_append": false, 00:15:01.017 "compare": true, 00:15:01.017 "compare_and_write": true, 00:15:01.017 "abort": true, 00:15:01.017 "seek_hole": false, 00:15:01.017 "seek_data": false, 00:15:01.017 "copy": true, 00:15:01.017 "nvme_iov_md": false 00:15:01.017 }, 00:15:01.017 "memory_domains": [ 00:15:01.017 { 00:15:01.017 "dma_device_id": "system", 00:15:01.017 "dma_device_type": 1 00:15:01.017 } 00:15:01.017 ], 00:15:01.017 "driver_specific": { 00:15:01.017 "nvme": [ 00:15:01.017 { 00:15:01.017 "trid": { 00:15:01.017 "trtype": "TCP", 00:15:01.017 "adrfam": "IPv4", 00:15:01.017 "traddr": "10.0.0.2", 00:15:01.017 "trsvcid": "4420", 00:15:01.017 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:01.017 }, 00:15:01.017 "ctrlr_data": { 00:15:01.017 "cntlid": 1, 00:15:01.017 "vendor_id": "0x8086", 00:15:01.017 "model_number": "SPDK bdev Controller", 00:15:01.017 "serial_number": "SPDK0", 
00:15:01.017 "firmware_revision": "24.09", 00:15:01.017 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:01.017 "oacs": { 00:15:01.017 "security": 0, 00:15:01.017 "format": 0, 00:15:01.017 "firmware": 0, 00:15:01.017 "ns_manage": 0 00:15:01.017 }, 00:15:01.017 "multi_ctrlr": true, 00:15:01.017 "ana_reporting": false 00:15:01.017 }, 00:15:01.017 "vs": { 00:15:01.017 "nvme_version": "1.3" 00:15:01.017 }, 00:15:01.017 "ns_data": { 00:15:01.017 "id": 1, 00:15:01.017 "can_share": true 00:15:01.017 } 00:15:01.017 } 00:15:01.017 ], 00:15:01.017 "mp_policy": "active_passive" 00:15:01.017 } 00:15:01.017 } 00:15:01.017 ] 00:15:01.017 22:12:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2716256 00:15:01.017 22:12:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:01.017 22:12:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:01.017 Running I/O for 10 seconds... 00:15:02.401 Latency(us) 00:15:02.401 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:02.401 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:02.401 Nvme0n1 : 1.00 17653.00 68.96 0.00 0.00 0.00 0.00 0.00 00:15:02.401 =================================================================================================================== 00:15:02.401 Total : 17653.00 68.96 0.00 0.00 0.00 0.00 0.00 00:15:02.401 00:15:02.971 22:12:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u aa7429ae-7f8c-4c45-85ea-58ba16da0bdc 00:15:03.232 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:03.232 Nvme0n1 : 2.00 17722.50 69.23 0.00 0.00 0.00 0.00 0.00 00:15:03.232 =================================================================================================================== 00:15:03.232 Total : 17722.50 69.23 0.00 0.00 0.00 0.00 0.00 00:15:03.232 00:15:03.232 true 00:15:03.232 22:12:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa7429ae-7f8c-4c45-85ea-58ba16da0bdc 00:15:03.232 22:12:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:03.493 22:12:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:03.493 22:12:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:03.493 22:12:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2716256 00:15:04.062 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:04.062 Nvme0n1 : 3.00 17756.33 69.36 0.00 0.00 0.00 0.00 0.00 00:15:04.062 =================================================================================================================== 00:15:04.062 Total : 17756.33 69.36 0.00 0.00 0.00 0.00 0.00 00:15:04.062 00:15:05.444 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:05.444 Nvme0n1 : 4.00 17775.25 69.43 0.00 0.00 0.00 0.00 0.00 00:15:05.444 =================================================================================================================== 00:15:05.444 Total : 17775.25 69.43 0.00 
0.00 0.00 0.00 0.00 00:15:05.444 00:15:06.390 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:06.390 Nvme0n1 : 5.00 17797.80 69.52 0.00 0.00 0.00 0.00 0.00 00:15:06.390 =================================================================================================================== 00:15:06.390 Total : 17797.80 69.52 0.00 0.00 0.00 0.00 0.00 00:15:06.390 00:15:07.330 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:07.330 Nvme0n1 : 6.00 17820.83 69.61 0.00 0.00 0.00 0.00 0.00 00:15:07.330 =================================================================================================================== 00:15:07.330 Total : 17820.83 69.61 0.00 0.00 0.00 0.00 0.00 00:15:07.330 00:15:08.270 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:08.270 Nvme0n1 : 7.00 17836.14 69.67 0.00 0.00 0.00 0.00 0.00 00:15:08.270 =================================================================================================================== 00:15:08.270 Total : 17836.14 69.67 0.00 0.00 0.00 0.00 0.00 00:15:08.270 00:15:09.210 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:09.211 Nvme0n1 : 8.00 17851.62 69.73 0.00 0.00 0.00 0.00 0.00 00:15:09.211 =================================================================================================================== 00:15:09.211 Total : 17851.62 69.73 0.00 0.00 0.00 0.00 0.00 00:15:09.211 00:15:10.153 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:10.153 Nvme0n1 : 9.00 17862.78 69.78 0.00 0.00 0.00 0.00 0.00 00:15:10.153 =================================================================================================================== 00:15:10.153 Total : 17862.78 69.78 0.00 0.00 0.00 0.00 0.00 00:15:10.153 00:15:11.096 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:11.096 Nvme0n1 : 10.00 17874.10 69.82 0.00 0.00 0.00 0.00 0.00 00:15:11.096 =================================================================================================================== 00:15:11.096 Total : 17874.10 69.82 0.00 0.00 0.00 0.00 0.00 00:15:11.096 00:15:11.096 00:15:11.096 Latency(us) 00:15:11.096 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.096 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:11.096 Nvme0n1 : 10.01 17874.14 69.82 0.00 0.00 7156.55 1870.51 10158.08 00:15:11.096 =================================================================================================================== 00:15:11.096 Total : 17874.14 69.82 0.00 0.00 7156.55 1870.51 10158.08 00:15:11.096 0 00:15:11.096 22:12:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2716042 00:15:11.096 22:12:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 2716042 ']' 00:15:11.096 22:12:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 2716042 00:15:11.096 22:12:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:15:11.096 22:12:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:11.096 22:12:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2716042 00:15:11.360 22:12:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:11.360 22:12:36 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:11.360 22:12:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2716042' 00:15:11.360 killing process with pid 2716042 00:15:11.360 22:12:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 2716042 00:15:11.360 Received shutdown signal, test time was about 10.000000 seconds 00:15:11.360 00:15:11.360 Latency(us) 00:15:11.360 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.360 =================================================================================================================== 00:15:11.360 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:11.360 22:12:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 2716042 00:15:11.360 22:12:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:11.620 22:12:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:11.620 22:12:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa7429ae-7f8c-4c45-85ea-58ba16da0bdc 00:15:11.620 22:12:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:11.881 22:12:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:11.881 22:12:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:15:11.881 22:12:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2711810 00:15:11.881 22:12:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2711810 00:15:11.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2711810 Killed "${NVMF_APP[@]}" "$@" 00:15:11.881 22:12:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:15:11.881 22:12:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:15:11.881 22:12:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:11.881 22:12:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:11.881 22:12:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:11.881 22:12:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2718400 00:15:11.881 22:12:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 2718400 00:15:11.881 22:12:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2718400 ']' 00:15:11.881 22:12:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.881 22:12:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:11.881 22:12:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:15:11.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.881 22:12:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:11.881 22:12:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:11.881 22:12:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:11.881 [2024-07-15 22:12:37.114735] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:15:11.881 [2024-07-15 22:12:37.114785] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:11.881 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.881 [2024-07-15 22:12:37.179741] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.141 [2024-07-15 22:12:37.244757] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:12.141 [2024-07-15 22:12:37.244790] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:12.141 [2024-07-15 22:12:37.244797] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:12.141 [2024-07-15 22:12:37.244804] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:12.141 [2024-07-15 22:12:37.244809] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
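The dirty branch above amounts to crashing the target mid-test and restarting it so the next lvstore load has to recover; roughly, under the same placeholder names as the earlier sketch:

  kill -9 "$NVMF_PID"                            # $NVMF_PID: pid of the running nvmf_tgt (2711810 here)
  wait "$NVMF_PID" 2>/dev/null || true
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &   # same flags as the restart above
  NVMF_PID=$!
  # once the RPC socket is back, re-register the backing file; loading the lvstore then
  # triggers the 'Performing recovery on blobstore' notice seen in the trace
  $RPC bdev_aio_create "$AIO_FILE" aio_bdev 4096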
00:15:12.141 [2024-07-15 22:12:37.244827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.712 22:12:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:12.712 22:12:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:15:12.712 22:12:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:12.712 22:12:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:12.712 22:12:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:12.712 22:12:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:12.712 22:12:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:12.972 [2024-07-15 22:12:38.053697] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:12.972 [2024-07-15 22:12:38.053780] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:12.972 [2024-07-15 22:12:38.053808] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:12.972 22:12:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:15:12.972 22:12:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev b029eceb-5469-447d-8c17-408a69053225 00:15:12.972 22:12:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=b029eceb-5469-447d-8c17-408a69053225 00:15:12.972 22:12:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:12.972 22:12:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:15:12.972 22:12:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:12.972 22:12:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:12.972 22:12:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:12.973 22:12:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b029eceb-5469-447d-8c17-408a69053225 -t 2000 00:15:13.233 [ 00:15:13.233 { 00:15:13.233 "name": "b029eceb-5469-447d-8c17-408a69053225", 00:15:13.233 "aliases": [ 00:15:13.233 "lvs/lvol" 00:15:13.233 ], 00:15:13.233 "product_name": "Logical Volume", 00:15:13.233 "block_size": 4096, 00:15:13.233 "num_blocks": 38912, 00:15:13.233 "uuid": "b029eceb-5469-447d-8c17-408a69053225", 00:15:13.233 "assigned_rate_limits": { 00:15:13.233 "rw_ios_per_sec": 0, 00:15:13.233 "rw_mbytes_per_sec": 0, 00:15:13.233 "r_mbytes_per_sec": 0, 00:15:13.233 "w_mbytes_per_sec": 0 00:15:13.233 }, 00:15:13.233 "claimed": false, 00:15:13.233 "zoned": false, 00:15:13.233 "supported_io_types": { 00:15:13.233 "read": true, 00:15:13.233 "write": true, 00:15:13.233 "unmap": true, 00:15:13.233 "flush": false, 00:15:13.233 "reset": true, 00:15:13.233 "nvme_admin": false, 00:15:13.233 "nvme_io": false, 00:15:13.233 "nvme_io_md": 
false, 00:15:13.233 "write_zeroes": true, 00:15:13.233 "zcopy": false, 00:15:13.233 "get_zone_info": false, 00:15:13.233 "zone_management": false, 00:15:13.233 "zone_append": false, 00:15:13.233 "compare": false, 00:15:13.233 "compare_and_write": false, 00:15:13.233 "abort": false, 00:15:13.233 "seek_hole": true, 00:15:13.233 "seek_data": true, 00:15:13.233 "copy": false, 00:15:13.233 "nvme_iov_md": false 00:15:13.233 }, 00:15:13.233 "driver_specific": { 00:15:13.233 "lvol": { 00:15:13.233 "lvol_store_uuid": "aa7429ae-7f8c-4c45-85ea-58ba16da0bdc", 00:15:13.233 "base_bdev": "aio_bdev", 00:15:13.233 "thin_provision": false, 00:15:13.233 "num_allocated_clusters": 38, 00:15:13.233 "snapshot": false, 00:15:13.233 "clone": false, 00:15:13.233 "esnap_clone": false 00:15:13.233 } 00:15:13.233 } 00:15:13.233 } 00:15:13.233 ] 00:15:13.233 22:12:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:15:13.233 22:12:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa7429ae-7f8c-4c45-85ea-58ba16da0bdc 00:15:13.233 22:12:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:15:13.233 22:12:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:15:13.233 22:12:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa7429ae-7f8c-4c45-85ea-58ba16da0bdc 00:15:13.233 22:12:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:15:13.494 22:12:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:15:13.494 22:12:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:13.755 [2024-07-15 22:12:38.821617] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:13.755 22:12:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa7429ae-7f8c-4c45-85ea-58ba16da0bdc 00:15:13.755 22:12:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:15:13.755 22:12:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa7429ae-7f8c-4c45-85ea-58ba16da0bdc 00:15:13.755 22:12:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:13.755 22:12:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:13.755 22:12:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:13.755 22:12:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:13.755 22:12:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:15:13.755 22:12:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:13.755 22:12:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:13.755 22:12:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:13.755 22:12:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa7429ae-7f8c-4c45-85ea-58ba16da0bdc 00:15:13.755 request: 00:15:13.755 { 00:15:13.755 "uuid": "aa7429ae-7f8c-4c45-85ea-58ba16da0bdc", 00:15:13.755 "method": "bdev_lvol_get_lvstores", 00:15:13.755 "req_id": 1 00:15:13.755 } 00:15:13.755 Got JSON-RPC error response 00:15:13.755 response: 00:15:13.755 { 00:15:13.755 "code": -19, 00:15:13.755 "message": "No such device" 00:15:13.755 } 00:15:13.755 22:12:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:15:13.755 22:12:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:13.755 22:12:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:13.755 22:12:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:13.755 22:12:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:14.016 aio_bdev 00:15:14.016 22:12:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b029eceb-5469-447d-8c17-408a69053225 00:15:14.016 22:12:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=b029eceb-5469-447d-8c17-408a69053225 00:15:14.016 22:12:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:14.016 22:12:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:15:14.016 22:12:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:14.016 22:12:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:14.016 22:12:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:14.016 22:12:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b029eceb-5469-447d-8c17-408a69053225 -t 2000 00:15:14.277 [ 00:15:14.277 { 00:15:14.277 "name": "b029eceb-5469-447d-8c17-408a69053225", 00:15:14.277 "aliases": [ 00:15:14.277 "lvs/lvol" 00:15:14.277 ], 00:15:14.277 "product_name": "Logical Volume", 00:15:14.277 "block_size": 4096, 00:15:14.277 "num_blocks": 38912, 00:15:14.277 "uuid": "b029eceb-5469-447d-8c17-408a69053225", 00:15:14.277 "assigned_rate_limits": { 00:15:14.277 "rw_ios_per_sec": 0, 00:15:14.277 "rw_mbytes_per_sec": 0, 00:15:14.277 "r_mbytes_per_sec": 0, 00:15:14.277 "w_mbytes_per_sec": 0 00:15:14.277 }, 00:15:14.277 "claimed": false, 00:15:14.277 "zoned": false, 00:15:14.277 "supported_io_types": { 
00:15:14.277 "read": true, 00:15:14.277 "write": true, 00:15:14.277 "unmap": true, 00:15:14.277 "flush": false, 00:15:14.277 "reset": true, 00:15:14.277 "nvme_admin": false, 00:15:14.277 "nvme_io": false, 00:15:14.277 "nvme_io_md": false, 00:15:14.277 "write_zeroes": true, 00:15:14.277 "zcopy": false, 00:15:14.277 "get_zone_info": false, 00:15:14.277 "zone_management": false, 00:15:14.277 "zone_append": false, 00:15:14.277 "compare": false, 00:15:14.277 "compare_and_write": false, 00:15:14.277 "abort": false, 00:15:14.277 "seek_hole": true, 00:15:14.277 "seek_data": true, 00:15:14.277 "copy": false, 00:15:14.277 "nvme_iov_md": false 00:15:14.277 }, 00:15:14.277 "driver_specific": { 00:15:14.277 "lvol": { 00:15:14.277 "lvol_store_uuid": "aa7429ae-7f8c-4c45-85ea-58ba16da0bdc", 00:15:14.277 "base_bdev": "aio_bdev", 00:15:14.277 "thin_provision": false, 00:15:14.277 "num_allocated_clusters": 38, 00:15:14.277 "snapshot": false, 00:15:14.277 "clone": false, 00:15:14.277 "esnap_clone": false 00:15:14.277 } 00:15:14.277 } 00:15:14.277 } 00:15:14.277 ] 00:15:14.277 22:12:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:15:14.277 22:12:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa7429ae-7f8c-4c45-85ea-58ba16da0bdc 00:15:14.277 22:12:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:14.277 22:12:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:14.558 22:12:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa7429ae-7f8c-4c45-85ea-58ba16da0bdc 00:15:14.558 22:12:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:14.558 22:12:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:14.558 22:12:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b029eceb-5469-447d-8c17-408a69053225 00:15:14.819 22:12:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u aa7429ae-7f8c-4c45-85ea-58ba16da0bdc 00:15:14.819 22:12:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:15.100 22:12:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:15.100 00:15:15.100 real 0m16.961s 00:15:15.100 user 0m44.220s 00:15:15.100 sys 0m3.112s 00:15:15.100 22:12:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:15.100 22:12:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:15.100 ************************************ 00:15:15.100 END TEST lvs_grow_dirty 00:15:15.100 ************************************ 00:15:15.100 22:12:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:15:15.100 22:12:40 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
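Post-recovery verification and teardown, condensed from the trace (the 99/61 cluster counts are what this 200M-to-400M, 150 MiB-lvol geometry yields):

  # the grow issued before the crash must have persisted through recovery
  [ "$($RPC bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].total_data_clusters')" -eq 99 ]
  [ "$($RPC bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].free_clusters')" -eq 61 ]
  # teardown, mirroring nvmf_lvs_grow.sh lines 92-95
  $RPC bdev_lvol_delete "$LVOL"
  $RPC bdev_lvol_delete_lvstore -u "$LVS"
  $RPC bdev_aio_delete aio_bdev
  rm -f "$AIO_FILE"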
00:15:15.100 22:12:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:15:15.100 22:12:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:15:15.100 22:12:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:15.100 22:12:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:15.100 22:12:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:15.100 22:12:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:15.100 22:12:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:15.100 22:12:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:15.100 nvmf_trace.0 00:15:15.386 22:12:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:15:15.386 22:12:40 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:15.386 22:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:15.386 22:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:15:15.386 22:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:15.386 22:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:15:15.386 22:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:15.386 22:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:15.386 rmmod nvme_tcp 00:15:15.386 rmmod nvme_fabrics 00:15:15.386 rmmod nvme_keyring 00:15:15.386 22:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:15.386 22:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:15:15.386 22:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:15:15.386 22:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2718400 ']' 00:15:15.386 22:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2718400 00:15:15.386 22:12:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 2718400 ']' 00:15:15.386 22:12:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 2718400 00:15:15.386 22:12:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:15:15.386 22:12:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:15.387 22:12:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2718400 00:15:15.387 22:12:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:15.387 22:12:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:15.387 22:12:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2718400' 00:15:15.387 killing process with pid 2718400 00:15:15.387 22:12:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 2718400 00:15:15.387 22:12:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 2718400 00:15:15.387 22:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:15.387 22:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:15.387 22:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:15.387 
22:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:15.387 22:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:15.387 22:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:15.387 22:12:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:15.387 22:12:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.935 22:12:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:17.935 00:15:17.935 real 0m43.193s 00:15:17.935 user 1m5.272s 00:15:17.935 sys 0m10.098s 00:15:17.935 22:12:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:17.935 22:12:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:17.935 ************************************ 00:15:17.935 END TEST nvmf_lvs_grow 00:15:17.935 ************************************ 00:15:17.935 22:12:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:17.935 22:12:42 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:17.935 22:12:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:17.935 22:12:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:17.935 22:12:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:17.935 ************************************ 00:15:17.935 START TEST nvmf_bdev_io_wait 00:15:17.935 ************************************ 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:17.935 * Looking for test storage... 
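The fini path above reduces to unloading the host-side NVMe/TCP modules, stopping the target, and removing the test namespace; a sketch (the body of _remove_spdk_ns is not shown in the trace, so the namespace deletion line is an assumption):

  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill "$NVMF_PID" && wait "$NVMF_PID" 2>/dev/null
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1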
00:15:17.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:15:17.935 22:12:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:24.524 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:24.524 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:24.524 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:24.524 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:24.524 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:24.524 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.543 ms 00:15:24.524 00:15:24.524 --- 10.0.0.2 ping statistics --- 00:15:24.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.524 rtt min/avg/max/mdev = 0.543/0.543/0.543/0.000 ms 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:24.524 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:24.524 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.387 ms 00:15:24.524 00:15:24.524 --- 10.0.0.1 ping statistics --- 00:15:24.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.524 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:15:24.524 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:24.525 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:24.525 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:24.525 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:24.525 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:24.525 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:24.525 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:24.525 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:24.525 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:24.525 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:24.525 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:24.525 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2723132 00:15:24.525 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2723132 00:15:24.525 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 2723132 ']' 00:15:24.525 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.525 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:24.525 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.525 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:24.525 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:24.525 22:12:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:24.525 [2024-07-15 22:12:49.793894] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
00:15:24.525 [2024-07-15 22:12:49.793956] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:24.525 EAL: No free 2048 kB hugepages reported on node 1 00:15:24.785 [2024-07-15 22:12:49.864244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:24.785 [2024-07-15 22:12:49.940193] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:24.785 [2024-07-15 22:12:49.940230] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:24.785 [2024-07-15 22:12:49.940238] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:24.785 [2024-07-15 22:12:49.940244] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:24.785 [2024-07-15 22:12:49.940249] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:24.785 [2024-07-15 22:12:49.940391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:24.785 [2024-07-15 22:12:49.940509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:24.785 [2024-07-15 22:12:49.940651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.785 [2024-07-15 22:12:49.940652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:25.355 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:25.355 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:15:25.355 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:25.355 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:25.355 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:25.355 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:25.355 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:25.355 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.355 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:25.355 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.355 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:25.355 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.355 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:25.355 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.355 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:25.355 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.355 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:25.616 [2024-07-15 22:12:50.680788] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
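For reference, the bring-up traced above reduces to the following sequence; the interface names, addresses and RPC arguments are exactly the ones printed by nvmf/common.sh and bdev_io_wait.sh, while the relative paths and the use of scripts/rpc.py in place of the rpc_cmd wrapper are assumptions of this sketch:

# Put the target-side port (cvl_0_0) into its own namespace; the initiator side
# (cvl_0_1) stays in the default namespace, so NVMe/TCP traffic crosses a real link.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port

# Start the target inside the namespace and finish initialization over RPC,
# mirroring bdev_io_wait.sh@18-20 in the trace above.
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
scripts/rpc.py bdev_set_options -p 5 -c 1
scripts/rpc.py framework_start_init
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192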
00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:25.616 Malloc0 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:25.616 [2024-07-15 22:12:50.755409] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2723408 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2723410 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:25.616 { 00:15:25.616 "params": { 00:15:25.616 "name": "Nvme$subsystem", 00:15:25.616 "trtype": "$TEST_TRANSPORT", 00:15:25.616 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:25.616 "adrfam": "ipv4", 00:15:25.616 "trsvcid": "$NVMF_PORT", 00:15:25.616 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:25.616 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:25.616 "hdgst": ${hdgst:-false}, 00:15:25.616 "ddgst": ${ddgst:-false} 00:15:25.616 }, 00:15:25.616 "method": "bdev_nvme_attach_controller" 00:15:25.616 } 00:15:25.616 EOF 00:15:25.616 )") 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2723413 00:15:25.616 22:12:50 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:25.616 { 00:15:25.616 "params": { 00:15:25.616 "name": "Nvme$subsystem", 00:15:25.616 "trtype": "$TEST_TRANSPORT", 00:15:25.616 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:25.616 "adrfam": "ipv4", 00:15:25.616 "trsvcid": "$NVMF_PORT", 00:15:25.616 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:25.616 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:25.616 "hdgst": ${hdgst:-false}, 00:15:25.616 "ddgst": ${ddgst:-false} 00:15:25.616 }, 00:15:25.616 "method": "bdev_nvme_attach_controller" 00:15:25.616 } 00:15:25.616 EOF 00:15:25.616 )") 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2723417 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:25.616 { 00:15:25.616 "params": { 00:15:25.616 "name": "Nvme$subsystem", 00:15:25.616 "trtype": "$TEST_TRANSPORT", 00:15:25.616 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:25.616 "adrfam": "ipv4", 00:15:25.616 "trsvcid": "$NVMF_PORT", 00:15:25.616 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:25.616 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:25.616 "hdgst": ${hdgst:-false}, 00:15:25.616 "ddgst": ${ddgst:-false} 00:15:25.616 }, 00:15:25.616 "method": "bdev_nvme_attach_controller" 00:15:25.616 } 00:15:25.616 EOF 00:15:25.616 )") 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:25.616 { 00:15:25.616 "params": { 00:15:25.616 "name": "Nvme$subsystem", 00:15:25.616 "trtype": "$TEST_TRANSPORT", 00:15:25.616 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:25.616 "adrfam": "ipv4", 00:15:25.616 "trsvcid": "$NVMF_PORT", 00:15:25.616 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:25.616 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:25.616 "hdgst": ${hdgst:-false}, 00:15:25.616 "ddgst": ${ddgst:-false} 00:15:25.616 }, 00:15:25.616 "method": "bdev_nvme_attach_controller" 00:15:25.616 } 00:15:25.616 EOF 00:15:25.616 )") 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2723408 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:25.616 "params": { 00:15:25.616 "name": "Nvme1", 00:15:25.616 "trtype": "tcp", 00:15:25.616 "traddr": "10.0.0.2", 00:15:25.616 "adrfam": "ipv4", 00:15:25.616 "trsvcid": "4420", 00:15:25.616 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:25.616 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:25.616 "hdgst": false, 00:15:25.616 "ddgst": false 00:15:25.616 }, 00:15:25.616 "method": "bdev_nvme_attach_controller" 00:15:25.616 }' 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
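The heredoc fragments above are what gen_nvmf_target_json pipes through jq and hands to each bdevperf job on /dev/fd/63. As a minimal sketch, assuming the outer "subsystems"/"bdev" wrapper shown here is what the helper emits around that fragment (the attach parameters themselves are copied from the printf output in the trace), the write job is equivalent to:

cat > bdevperf_nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Same invocation as the -w write job above (core mask 0x10, instance 1), reading the
# generated config from a file instead of the /dev/fd/63 process substitution.
build/examples/bdevperf -m 0x10 -i 1 --json bdevperf_nvme1.json -q 128 -o 4096 -w write -t 1 -s 256

The read, flush and unmap jobs traced above differ only in core mask, instance id (-i 2..4) and the -w argument.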
00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:25.616 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:25.616 "params": { 00:15:25.617 "name": "Nvme1", 00:15:25.617 "trtype": "tcp", 00:15:25.617 "traddr": "10.0.0.2", 00:15:25.617 "adrfam": "ipv4", 00:15:25.617 "trsvcid": "4420", 00:15:25.617 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:25.617 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:25.617 "hdgst": false, 00:15:25.617 "ddgst": false 00:15:25.617 }, 00:15:25.617 "method": "bdev_nvme_attach_controller" 00:15:25.617 }' 00:15:25.617 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:25.617 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:25.617 "params": { 00:15:25.617 "name": "Nvme1", 00:15:25.617 "trtype": "tcp", 00:15:25.617 "traddr": "10.0.0.2", 00:15:25.617 "adrfam": "ipv4", 00:15:25.617 "trsvcid": "4420", 00:15:25.617 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:25.617 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:25.617 "hdgst": false, 00:15:25.617 "ddgst": false 00:15:25.617 }, 00:15:25.617 "method": "bdev_nvme_attach_controller" 00:15:25.617 }' 00:15:25.617 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:25.617 22:12:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:25.617 "params": { 00:15:25.617 "name": "Nvme1", 00:15:25.617 "trtype": "tcp", 00:15:25.617 "traddr": "10.0.0.2", 00:15:25.617 "adrfam": "ipv4", 00:15:25.617 "trsvcid": "4420", 00:15:25.617 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:25.617 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:25.617 "hdgst": false, 00:15:25.617 "ddgst": false 00:15:25.617 }, 00:15:25.617 "method": "bdev_nvme_attach_controller" 00:15:25.617 }' 00:15:25.617 [2024-07-15 22:12:50.807415] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:15:25.617 [2024-07-15 22:12:50.807471] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:25.617 [2024-07-15 22:12:50.809006] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:15:25.617 [2024-07-15 22:12:50.809053] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:25.617 [2024-07-15 22:12:50.812747] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:15:25.617 [2024-07-15 22:12:50.812794] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:25.617 [2024-07-15 22:12:50.813279] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
00:15:25.617 [2024-07-15 22:12:50.813326] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:25.617 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.617 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.877 [2024-07-15 22:12:50.944771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.877 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.877 [2024-07-15 22:12:50.995598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:25.877 [2024-07-15 22:12:51.008605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.877 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.877 [2024-07-15 22:12:51.058149] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.877 [2024-07-15 22:12:51.059859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:15:25.877 [2024-07-15 22:12:51.104826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.877 [2024-07-15 22:12:51.108678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:15:25.877 [2024-07-15 22:12:51.154692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:15:26.137 Running I/O for 1 seconds... 00:15:26.137 Running I/O for 1 seconds... 00:15:26.137 Running I/O for 1 seconds... 00:15:26.137 Running I/O for 1 seconds... 00:15:27.078 00:15:27.078 Latency(us) 00:15:27.078 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.078 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:27.078 Nvme1n1 : 1.01 11203.01 43.76 0.00 0.00 11385.82 6389.76 21189.97 00:15:27.078 =================================================================================================================== 00:15:27.078 Total : 11203.01 43.76 0.00 0.00 11385.82 6389.76 21189.97 00:15:27.078 00:15:27.078 Latency(us) 00:15:27.078 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.078 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:27.078 Nvme1n1 : 1.00 188521.41 736.41 0.00 0.00 676.60 271.36 778.24 00:15:27.078 =================================================================================================================== 00:15:27.078 Total : 188521.41 736.41 0.00 0.00 676.60 271.36 778.24 00:15:27.078 00:15:27.078 Latency(us) 00:15:27.078 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.078 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:27.078 Nvme1n1 : 1.00 17250.73 67.39 0.00 0.00 7399.20 2689.71 13434.88 00:15:27.078 =================================================================================================================== 00:15:27.078 Total : 17250.73 67.39 0.00 0.00 7399.20 2689.71 13434.88 00:15:27.338 22:12:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2723410 00:15:27.338 00:15:27.338 Latency(us) 00:15:27.338 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.338 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:27.338 Nvme1n1 : 1.00 13077.61 51.08 0.00 0.00 9759.36 4724.05 19988.48 00:15:27.338 =================================================================================================================== 00:15:27.338 Total : 13077.61 51.08 
0.00 0.00 9759.36 4724.05 19988.48 00:15:27.338 22:12:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2723413 00:15:27.338 22:12:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2723417 00:15:27.338 22:12:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:27.338 22:12:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.338 22:12:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:27.338 22:12:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.338 22:12:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:27.338 22:12:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:27.338 22:12:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:27.338 22:12:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:15:27.338 22:12:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:27.338 22:12:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:15:27.338 22:12:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:27.339 22:12:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:27.339 rmmod nvme_tcp 00:15:27.339 rmmod nvme_fabrics 00:15:27.339 rmmod nvme_keyring 00:15:27.599 22:12:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:27.599 22:12:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:15:27.599 22:12:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:15:27.599 22:12:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2723132 ']' 00:15:27.599 22:12:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2723132 00:15:27.599 22:12:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 2723132 ']' 00:15:27.599 22:12:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 2723132 00:15:27.599 22:12:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:15:27.599 22:12:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:27.599 22:12:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2723132 00:15:27.599 22:12:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:27.599 22:12:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:27.599 22:12:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2723132' 00:15:27.599 killing process with pid 2723132 00:15:27.599 22:12:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 2723132 00:15:27.599 22:12:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 2723132 00:15:27.599 22:12:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:27.599 22:12:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:27.599 22:12:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:27.599 22:12:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:27.599 22:12:52 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:15:27.599 22:12:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.599 22:12:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:27.599 22:12:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.143 22:12:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:30.143 00:15:30.143 real 0m12.111s 00:15:30.143 user 0m18.881s 00:15:30.143 sys 0m6.502s 00:15:30.143 22:12:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:30.143 22:12:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:30.143 ************************************ 00:15:30.143 END TEST nvmf_bdev_io_wait 00:15:30.143 ************************************ 00:15:30.143 22:12:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:30.143 22:12:54 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:30.143 22:12:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:30.143 22:12:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:30.143 22:12:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:30.143 ************************************ 00:15:30.143 START TEST nvmf_queue_depth 00:15:30.143 ************************************ 00:15:30.143 22:12:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:30.143 * Looking for test storage... 
00:15:30.143 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:30.143 22:12:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:30.143 22:12:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:15:30.143 22:12:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:30.143 22:12:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:30.143 22:12:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:30.143 22:12:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:30.143 22:12:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:30.143 22:12:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:30.143 22:12:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:30.143 22:12:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:30.143 22:12:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:30.143 22:12:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:30.143 22:12:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:30.143 22:12:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:30.143 22:12:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:30.143 22:12:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:30.143 22:12:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:30.143 22:12:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:30.143 22:12:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:30.143 22:12:55 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:30.143 22:12:55 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:30.143 22:12:55 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:30.143 22:12:55 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.143 22:12:55 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.144 22:12:55 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.144 22:12:55 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:15:30.144 22:12:55 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.144 22:12:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:15:30.144 22:12:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:30.144 22:12:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:30.144 22:12:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:30.144 22:12:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:30.144 22:12:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:30.144 22:12:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:30.144 22:12:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:30.144 22:12:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:30.144 22:12:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:30.144 22:12:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:30.144 22:12:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:30.144 22:12:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:30.144 22:12:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:30.144 22:12:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:30.144 22:12:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:30.144 22:12:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:30.144 22:12:55 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:15:30.144 22:12:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.144 22:12:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:30.144 22:12:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.144 22:12:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:30.144 22:12:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:30.144 22:12:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:15:30.144 22:12:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:36.734 
22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:36.734 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:36.734 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:36.734 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:36.734 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:36.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:36.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.536 ms 00:15:36.734 00:15:36.734 --- 10.0.0.2 ping statistics --- 00:15:36.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.734 rtt min/avg/max/mdev = 0.536/0.536/0.536/0.000 ms 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:36.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:36.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:15:36.734 00:15:36.734 --- 10.0.0.1 ping statistics --- 00:15:36.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.734 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:36.734 22:13:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:36.734 22:13:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:36.734 22:13:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:36.735 22:13:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:36.735 22:13:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:36.735 22:13:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2727839 00:15:36.735 22:13:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2727839 00:15:36.735 22:13:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:36.735 22:13:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2727839 ']' 00:15:36.735 22:13:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.735 22:13:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:36.735 22:13:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.735 22:13:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:36.735 22:13:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:36.997 [2024-07-15 22:13:02.078146] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
00:15:36.997 [2024-07-15 22:13:02.078196] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.997 EAL: No free 2048 kB hugepages reported on node 1 00:15:36.997 [2024-07-15 22:13:02.160170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.997 [2024-07-15 22:13:02.234859] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:36.997 [2024-07-15 22:13:02.234914] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:36.997 [2024-07-15 22:13:02.234922] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:36.997 [2024-07-15 22:13:02.234930] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:36.997 [2024-07-15 22:13:02.234936] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:36.997 [2024-07-15 22:13:02.234961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:37.571 22:13:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:37.571 22:13:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:15:37.571 22:13:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:37.571 22:13:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:37.571 22:13:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:37.571 22:13:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:37.571 22:13:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:37.571 22:13:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.571 22:13:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:37.832 [2024-07-15 22:13:02.899456] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:37.832 22:13:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.832 22:13:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:37.832 22:13:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.832 22:13:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:37.832 Malloc0 00:15:37.832 22:13:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.832 22:13:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:37.832 22:13:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.832 22:13:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:37.832 22:13:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.832 22:13:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:37.832 22:13:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.832 
22:13:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:37.832 22:13:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.832 22:13:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:37.832 22:13:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.832 22:13:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:37.832 [2024-07-15 22:13:02.958527] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:37.832 22:13:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.832 22:13:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2728034 00:15:37.832 22:13:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:37.832 22:13:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:37.832 22:13:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2728034 /var/tmp/bdevperf.sock 00:15:37.832 22:13:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2728034 ']' 00:15:37.832 22:13:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:37.832 22:13:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:37.832 22:13:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:37.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:37.832 22:13:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:37.832 22:13:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:37.832 [2024-07-15 22:13:03.019769] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
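Taken together with the attach and perform_tests calls traced just below, the queue-depth run reduces to the following sketch; paths are relative to the SPDK tree and scripts/rpc.py stands in for the rpc_cmd wrapper used by the harness:

# Target side: export a 64 MiB, 512 B-block malloc namespace over NVMe/TCP on 10.0.0.2:4420.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: bdevperf starts idle (-z) on its own RPC socket, the NVMe-oF controller
# is attached through that socket, then the 10 s verify run at queue depth 1024 is started.
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests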
00:15:37.832 [2024-07-15 22:13:03.019875] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2728034 ] 00:15:37.832 EAL: No free 2048 kB hugepages reported on node 1 00:15:37.832 [2024-07-15 22:13:03.082705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.832 [2024-07-15 22:13:03.148546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.773 22:13:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:38.773 22:13:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:15:38.773 22:13:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:38.773 22:13:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.773 22:13:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:38.773 NVMe0n1 00:15:38.773 22:13:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.773 22:13:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:38.773 Running I/O for 10 seconds... 00:15:48.800 00:15:48.800 Latency(us) 00:15:48.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:48.800 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:48.800 Verification LBA range: start 0x0 length 0x4000 00:15:48.800 NVMe0n1 : 10.04 11442.91 44.70 0.00 0.00 89159.32 7099.73 69468.16 00:15:48.800 =================================================================================================================== 00:15:48.800 Total : 11442.91 44.70 0.00 0.00 89159.32 7099.73 69468.16 00:15:48.800 0 00:15:48.800 22:13:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2728034 00:15:48.800 22:13:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2728034 ']' 00:15:48.800 22:13:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2728034 00:15:48.800 22:13:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:15:48.800 22:13:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:48.800 22:13:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2728034 00:15:48.800 22:13:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:48.800 22:13:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:48.800 22:13:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2728034' 00:15:48.800 killing process with pid 2728034 00:15:48.800 22:13:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2728034 00:15:48.800 Received shutdown signal, test time was about 10.000000 seconds 00:15:48.800 00:15:48.800 Latency(us) 00:15:48.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:48.800 
=================================================================================================================== 00:15:48.800 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:48.800 22:13:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2728034 00:15:49.060 22:13:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:49.060 22:13:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:49.060 22:13:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:49.060 22:13:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:15:49.060 22:13:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:49.060 22:13:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:15:49.060 22:13:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:49.060 22:13:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:49.060 rmmod nvme_tcp 00:15:49.060 rmmod nvme_fabrics 00:15:49.060 rmmod nvme_keyring 00:15:49.060 22:13:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:49.060 22:13:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:15:49.060 22:13:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:15:49.060 22:13:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2727839 ']' 00:15:49.060 22:13:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2727839 00:15:49.060 22:13:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2727839 ']' 00:15:49.060 22:13:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2727839 00:15:49.060 22:13:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:15:49.060 22:13:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:49.060 22:13:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2727839 00:15:49.060 22:13:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:49.060 22:13:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:49.060 22:13:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2727839' 00:15:49.060 killing process with pid 2727839 00:15:49.060 22:13:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2727839 00:15:49.060 22:13:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2727839 00:15:49.322 22:13:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:49.322 22:13:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:49.322 22:13:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:49.322 22:13:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:49.322 22:13:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:49.322 22:13:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:49.322 22:13:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:49.322 22:13:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.236 22:13:16 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:51.236 00:15:51.236 real 0m21.500s 00:15:51.236 user 0m25.203s 00:15:51.236 sys 0m6.249s 00:15:51.236 22:13:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:51.236 22:13:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:51.236 ************************************ 00:15:51.236 END TEST nvmf_queue_depth 00:15:51.236 ************************************ 00:15:51.236 22:13:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:51.236 22:13:16 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:51.236 22:13:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:51.236 22:13:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:51.236 22:13:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:51.498 ************************************ 00:15:51.498 START TEST nvmf_target_multipath 00:15:51.498 ************************************ 00:15:51.498 22:13:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:51.498 * Looking for test storage... 00:15:51.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:51.498 22:13:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:51.498 22:13:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:15:51.498 22:13:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:51.498 22:13:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:51.498 22:13:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:51.498 22:13:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:51.498 22:13:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:51.498 22:13:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:51.498 22:13:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:51.498 22:13:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:51.498 22:13:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:51.498 22:13:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:51.498 22:13:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:51.498 22:13:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:51.498 22:13:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:51.498 22:13:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:51.498 22:13:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:51.498 22:13:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:51.498 22:13:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:51.498 22:13:16 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:51.498 22:13:16 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:51.498 22:13:16 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:51.498 22:13:16 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.498 22:13:16 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.498 22:13:16 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.498 22:13:16 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:15:51.498 22:13:16 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.498 22:13:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:15:51.498 22:13:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:51.498 22:13:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:51.498 22:13:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:51.498 22:13:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:51.498 22:13:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:51.498 22:13:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:51.498 22:13:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:51.498 22:13:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:51.499 22:13:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:51.499 22:13:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:51.499 22:13:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:51.499 22:13:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:51.499 22:13:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:15:51.499 22:13:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:51.499 22:13:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:51.499 22:13:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:51.499 22:13:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:51.499 22:13:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:51.499 22:13:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.499 22:13:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:51.499 22:13:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.499 22:13:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:51.499 22:13:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:51.499 22:13:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:15:51.499 22:13:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:59.650 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:59.650 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:59.650 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:59.650 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:59.650 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:59.651 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:59.651 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:59.651 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:59.651 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:59.651 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:59.651 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.441 ms 00:15:59.651 00:15:59.651 --- 10.0.0.2 ping statistics --- 00:15:59.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.651 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:15:59.651 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:59.651 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:59.651 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:15:59.651 00:15:59.651 --- 10.0.0.1 ping statistics --- 00:15:59.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.651 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:15:59.651 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:59.651 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:15:59.651 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:59.651 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:59.651 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:59.651 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:59.651 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:59.651 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:59.651 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:59.651 22:13:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:15:59.651 22:13:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:15:59.651 only one NIC for nvmf test 00:15:59.651 22:13:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:15:59.651 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:59.651 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:59.651 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:59.651 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:59.651 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:59.651 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:59.651 rmmod nvme_tcp 00:15:59.651 rmmod nvme_fabrics 00:15:59.651 rmmod nvme_keyring 00:15:59.651 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:59.651 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:59.651 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:59.651 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:59.651 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:59.651 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:59.651 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:59.651 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:59.651 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:59.651 22:13:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.651 22:13:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:59.651 22:13:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.038 22:13:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:16:01.038 22:13:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:16:01.038 22:13:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:16:01.038 22:13:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:01.038 22:13:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:16:01.038 22:13:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:01.038 22:13:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:01.038 22:13:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:01.038 22:13:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:01.038 22:13:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:01.038 22:13:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:01.038 22:13:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:01.038 22:13:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:01.038 22:13:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:01.038 22:13:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:01.038 22:13:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:01.038 22:13:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:01.038 22:13:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:01.038 22:13:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.038 22:13:25 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:01.038 22:13:25 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.038 22:13:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:01.038 00:16:01.038 real 0m9.395s 00:16:01.038 user 0m2.016s 00:16:01.038 sys 0m5.267s 00:16:01.038 22:13:25 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:01.038 22:13:25 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:01.038 ************************************ 00:16:01.038 END TEST nvmf_target_multipath 00:16:01.038 ************************************ 00:16:01.038 22:13:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:01.039 22:13:26 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:01.039 22:13:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:01.039 22:13:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:01.039 22:13:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:01.039 ************************************ 00:16:01.039 START TEST nvmf_zcopy 00:16:01.039 ************************************ 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:01.039 * Looking for test storage... 
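A note on the multipath result recorded just above: the test never exercises multipath I/O on this rig. common.sh leaves NVMF_SECOND_TARGET_IP empty because only one NIC pair is wired up, so multipath.sh takes its early-exit branch, prints 'only one NIC for nvmf test' and still finishes with exit 0 (reported as PASS). A rough reconstruction of that guard from the trace of multipath.sh lines 45-48, shown only to make the exit path explicit; the variable name is inferred from the NVMF_SECOND_TARGET_IP= assignment earlier in the trace:

if [ -z "$NVMF_SECOND_TARGET_IP" ]; then
    echo 'only one NIC for nvmf test'
    nvmftestfini
    exit 0
fi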
00:16:01.039 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:16:01.039 22:13:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:07.630 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:07.630 
22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:07.630 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:07.630 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:07.630 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:07.630 22:13:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:07.891 22:13:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:07.891 22:13:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:07.891 22:13:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:07.891 22:13:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:07.891 22:13:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:07.891 22:13:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:07.891 22:13:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:07.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:07.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.514 ms 00:16:07.891 00:16:07.891 --- 10.0.0.2 ping statistics --- 00:16:07.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.891 rtt min/avg/max/mdev = 0.514/0.514/0.514/0.000 ms 00:16:07.891 22:13:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:07.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:07.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.436 ms 00:16:07.891 00:16:07.891 --- 10.0.0.1 ping statistics --- 00:16:07.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.891 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:16:07.891 22:13:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:07.891 22:13:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:16:07.891 22:13:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:07.891 22:13:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:07.891 22:13:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:07.891 22:13:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:07.891 22:13:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:07.891 22:13:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:07.891 22:13:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:07.891 22:13:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:16:07.891 22:13:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:07.891 22:13:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:07.891 22:13:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:07.891 22:13:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2738509 00:16:07.891 22:13:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2738509 00:16:07.891 22:13:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:07.891 22:13:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 2738509 ']' 00:16:08.151 22:13:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.151 22:13:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:08.151 22:13:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.151 22:13:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:08.151 22:13:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:08.151 [2024-07-15 22:13:33.270516] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:16:08.151 [2024-07-15 22:13:33.270580] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:08.151 EAL: No free 2048 kB hugepages reported on node 1 00:16:08.151 [2024-07-15 22:13:33.359973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.151 [2024-07-15 22:13:33.453202] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:08.151 [2024-07-15 22:13:33.453250] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
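These tracepoint notices are printed by every nvmf_tgt launched in this job (the target above runs with -i 0 -e 0xFFFF -m 0x2, so all trace groups are enabled). To inspect one of these targets while it is still running, the command suggested by the notice itself is enough; a minimal sketch, assuming the spdk_trace binary from the build tree used in this job and the shmem id 0 passed via -i 0 (output paths are just examples):

# Decode a live snapshot of the nvmf tracepoint group for the app started with -i 0
./build/bin/spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt
# Or keep the raw shared-memory trace file for offline analysis, as the next notice suggests
cp /dev/shm/nvmf_trace.0 /tmp/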
00:16:08.151 [2024-07-15 22:13:33.453258] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:08.151 [2024-07-15 22:13:33.453265] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:08.152 [2024-07-15 22:13:33.453271] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:08.152 [2024-07-15 22:13:33.453306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:08.749 22:13:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:08.749 22:13:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:16:08.749 22:13:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:08.749 22:13:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:08.749 22:13:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:09.010 22:13:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:09.010 22:13:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:16:09.010 22:13:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:16:09.010 22:13:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.010 22:13:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:09.010 [2024-07-15 22:13:34.110859] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:09.010 22:13:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.010 22:13:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:09.010 22:13:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.010 22:13:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:09.010 22:13:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.010 22:13:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:09.010 22:13:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.010 22:13:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:09.010 [2024-07-15 22:13:34.135059] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:09.010 22:13:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.010 22:13:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:09.010 22:13:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.010 22:13:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:09.010 22:13:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.010 22:13:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:16:09.010 22:13:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.010 22:13:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:09.010 malloc0 00:16:09.010 22:13:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.010 
22:13:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:16:09.010 22:13:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:09.010 22:13:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:16:09.010 22:13:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:09.010 22:13:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:16:09.010 22:13:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:16:09.010 22:13:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:16:09.010 22:13:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:16:09.010 22:13:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:16:09.010 22:13:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:16:09.010 {
00:16:09.010 "params": {
00:16:09.010 "name": "Nvme$subsystem",
00:16:09.010 "trtype": "$TEST_TRANSPORT",
00:16:09.010 "traddr": "$NVMF_FIRST_TARGET_IP",
00:16:09.010 "adrfam": "ipv4",
00:16:09.010 "trsvcid": "$NVMF_PORT",
00:16:09.010 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:16:09.010 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:16:09.010 "hdgst": ${hdgst:-false},
00:16:09.010 "ddgst": ${ddgst:-false}
00:16:09.010 },
00:16:09.010 "method": "bdev_nvme_attach_controller"
00:16:09.010 }
00:16:09.010 EOF
00:16:09.010 )")
00:16:09.010 22:13:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:16:09.010 22:13:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:16:09.010 22:13:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:16:09.010 22:13:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:16:09.010 "params": {
00:16:09.010 "name": "Nvme1",
00:16:09.010 "trtype": "tcp",
00:16:09.010 "traddr": "10.0.0.2",
00:16:09.010 "adrfam": "ipv4",
00:16:09.010 "trsvcid": "4420",
00:16:09.010 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:16:09.010 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:16:09.010 "hdgst": false,
00:16:09.010 "ddgst": false
00:16:09.010 },
00:16:09.010 "method": "bdev_nvme_attach_controller"
00:16:09.010 }'
00:16:09.010 [2024-07-15 22:13:34.235185] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization...
00:16:09.010 [2024-07-15 22:13:34.235249] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2738623 ]
00:16:09.010 EAL: No free 2048 kB hugepages reported on node 1
00:16:09.010 [2024-07-15 22:13:34.298903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:09.271 [2024-07-15 22:13:34.373691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:16:09.531 Running I/O for 10 seconds...
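The trace above amounts to a complete NVMe-oF/TCP bring-up followed by the first bdevperf pass: nvmf_tgt runs on core 1, a TCP transport is created with zero copy enabled, subsystem nqn.2016-06.io.spdk:cnode1 gets a data and a discovery listener on 10.0.0.2:4420, and a 32 MB malloc bdev (4096-byte blocks) is exported as namespace 1 before bdevperf drives a 10-second verify workload (-q 128, 8 KiB I/O) using the bdev_nvme_attach_controller parameters printed just above. A minimal hand-written sketch of the same bring-up against scripts/rpc.py follows; the SPDK path and all RPC names and arguments are copied from the rpc_cmd calls in the trace, while relying on the default /var/tmp/spdk.sock socket and polling for it are assumptions, and the ip netns exec cvl_0_0_ns_spdk wrapper plus the -i 0 -e 0xFFFF flags used by the CI run are dropped for brevity:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # checkout path taken from the trace
  RPC="$SPDK_DIR/scripts/rpc.py"                                # assumes the default /var/tmp/spdk.sock RPC socket
  "$SPDK_DIR/build/bin/nvmf_tgt" -m 0x2 &                       # target on core 1, as with nvmfappstart -m 0x2
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done         # crude stand-in for waitforlisten
  $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy             # TCP transport with zero copy, flags as traced
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_malloc_create 32 4096 -b malloc0                    # 32 MB malloc bdev, 4096-byte blocks
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # export it as NSID 1

bdevperf itself is then pointed at that target via --json, fed the resolved Nvme1 attach parameters shown in the trace (traddr 10.0.0.2, trsvcid 4420, subnqn nqn.2016-06.io.spdk:cnode1, digests off).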
00:16:19.567
00:16:19.567 Latency(us)
00:16:19.567 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:19.567 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:16:19.567 Verification LBA range: start 0x0 length 0x1000
00:16:19.567 Nvme1n1 : 10.01 9003.47 70.34 0.00 0.00 14165.33 2007.04 37792.43
00:16:19.567 ===================================================================================================================
00:16:19.567 Total : 9003.47 70.34 0.00 0.00 14165.33 2007.04 37792.43
00:16:19.567 22:13:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2740756
00:16:19.567 22:13:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:16:19.567 22:13:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:16:19.567 22:13:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:16:19.567 22:13:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:16:19.567 22:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:16:19.567 22:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:16:19.567 22:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:16:19.567 22:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:16:19.567 {
00:16:19.567 "params": {
00:16:19.567 "name": "Nvme$subsystem",
00:16:19.567 "trtype": "$TEST_TRANSPORT",
00:16:19.567 "traddr": "$NVMF_FIRST_TARGET_IP",
00:16:19.567 "adrfam": "ipv4",
00:16:19.567 "trsvcid": "$NVMF_PORT",
00:16:19.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:16:19.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:16:19.567 "hdgst": ${hdgst:-false},
00:16:19.567 "ddgst": ${ddgst:-false}
00:16:19.567 },
00:16:19.567 "method": "bdev_nvme_attach_controller"
00:16:19.567 }
00:16:19.567 EOF
00:16:19.567 )")
00:16:19.567 [2024-07-15 22:13:44.815303] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:19.567 [2024-07-15 22:13:44.815329] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:19.567 22:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:16:19.567 22:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
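After the 9003 IOPS / 70.34 MiB/s verify pass, the test moves straight into a second bdevperf run (perfpid 2740756, -t 5 -q 128 -w randrw -M 50 -o 8192), and from here to the end of the excerpt the target log is dominated by one repeating pair of messages: spdk_nvmf_subsystem_add_ns_ext rejecting a request because NSID 1 is already in use, followed by nvmf_rpc_ns_paused reporting that the namespace could not be added. The pair recurs roughly every 12-13 ms for the remainder of the excerpt, i.e. something keeps retrying nvmf_subsystem_add_ns for the same NSID while the existing namespace stays attached, presumably the test exercising namespace add attempts underneath the active zero-copy workload. A two-line illustration of what produces exactly this pair, reusing the $RPC helper from the sketch above (hand-written, not lifted from zcopy.sh):

  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # succeeds only while NSID 1 is free
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # rejected: "Requested NSID 1 already in use"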
00:16:19.568 22:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:19.568 22:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:19.568 "params": { 00:16:19.568 "name": "Nvme1", 00:16:19.568 "trtype": "tcp", 00:16:19.568 "traddr": "10.0.0.2", 00:16:19.568 "adrfam": "ipv4", 00:16:19.568 "trsvcid": "4420", 00:16:19.568 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:19.568 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:19.568 "hdgst": false, 00:16:19.568 "ddgst": false 00:16:19.568 }, 00:16:19.568 "method": "bdev_nvme_attach_controller" 00:16:19.568 }' 00:16:19.568 [2024-07-15 22:13:44.827305] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.568 [2024-07-15 22:13:44.827314] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.568 [2024-07-15 22:13:44.839335] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.568 [2024-07-15 22:13:44.839342] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.568 [2024-07-15 22:13:44.851367] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.568 [2024-07-15 22:13:44.851373] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.568 [2024-07-15 22:13:44.856661] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:16:19.568 [2024-07-15 22:13:44.856722] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2740756 ] 00:16:19.568 [2024-07-15 22:13:44.863397] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.568 [2024-07-15 22:13:44.863404] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.568 [2024-07-15 22:13:44.875428] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.568 [2024-07-15 22:13:44.875435] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.568 [2024-07-15 22:13:44.887458] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.568 [2024-07-15 22:13:44.887465] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.568 EAL: No free 2048 kB hugepages reported on node 1 00:16:19.828 [2024-07-15 22:13:44.899489] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.828 [2024-07-15 22:13:44.899497] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.829 [2024-07-15 22:13:44.911522] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.829 [2024-07-15 22:13:44.911533] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.829 [2024-07-15 22:13:44.922075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.829 [2024-07-15 22:13:44.923552] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.829 [2024-07-15 22:13:44.923559] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.829 [2024-07-15 22:13:44.935583] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.829 [2024-07-15 22:13:44.935591] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.829 [2024-07-15 22:13:44.947612] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.829 [2024-07-15 22:13:44.947620] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.829 [2024-07-15 22:13:44.959645] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.829 [2024-07-15 22:13:44.959657] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.829 [2024-07-15 22:13:44.971673] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.829 [2024-07-15 22:13:44.971683] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.829 [2024-07-15 22:13:44.983703] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.829 [2024-07-15 22:13:44.983712] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.829 [2024-07-15 22:13:44.986949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.829 [2024-07-15 22:13:44.995732] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.829 [2024-07-15 22:13:44.995740] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.829 [2024-07-15 22:13:45.007768] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.829 [2024-07-15 22:13:45.007781] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.829 [2024-07-15 22:13:45.019796] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.829 [2024-07-15 22:13:45.019805] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.829 [2024-07-15 22:13:45.031826] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.829 [2024-07-15 22:13:45.031834] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.829 [2024-07-15 22:13:45.043857] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.829 [2024-07-15 22:13:45.043864] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.829 [2024-07-15 22:13:45.055888] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.829 [2024-07-15 22:13:45.055894] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.829 [2024-07-15 22:13:45.067931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.829 [2024-07-15 22:13:45.067944] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.829 [2024-07-15 22:13:45.079955] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.829 [2024-07-15 22:13:45.079964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.829 [2024-07-15 22:13:45.091989] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.829 [2024-07-15 22:13:45.091998] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.829 [2024-07-15 22:13:45.104021] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.829 [2024-07-15 22:13:45.104030] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:16:19.829 [2024-07-15 22:13:45.116050] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.829 [2024-07-15 22:13:45.116057] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.090 [2024-07-15 22:13:45.161714] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.090 [2024-07-15 22:13:45.161730] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.090 [2024-07-15 22:13:45.172204] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.090 [2024-07-15 22:13:45.172212] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.090 Running I/O for 5 seconds... 00:16:20.090 [2024-07-15 22:13:45.186691] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.090 [2024-07-15 22:13:45.186707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.090 [2024-07-15 22:13:45.201411] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.090 [2024-07-15 22:13:45.201427] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.090 [2024-07-15 22:13:45.213824] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.090 [2024-07-15 22:13:45.213840] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.090 [2024-07-15 22:13:45.226842] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.090 [2024-07-15 22:13:45.226857] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.090 [2024-07-15 22:13:45.240242] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.090 [2024-07-15 22:13:45.240257] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.090 [2024-07-15 22:13:45.253719] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.090 [2024-07-15 22:13:45.253733] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.090 [2024-07-15 22:13:45.267174] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.090 [2024-07-15 22:13:45.267189] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.090 [2024-07-15 22:13:45.280611] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.090 [2024-07-15 22:13:45.280626] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.090 [2024-07-15 22:13:45.294006] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.090 [2024-07-15 22:13:45.294021] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.090 [2024-07-15 22:13:45.307648] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.090 [2024-07-15 22:13:45.307663] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.090 [2024-07-15 22:13:45.320458] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.090 [2024-07-15 22:13:45.320472] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.090 [2024-07-15 22:13:45.333792] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.090 [2024-07-15 22:13:45.333806] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.090 [2024-07-15 22:13:45.346480] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.090 [2024-07-15 22:13:45.346494] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.090 [2024-07-15 22:13:45.359619] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.090 [2024-07-15 22:13:45.359634] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.090 [2024-07-15 22:13:45.372314] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.090 [2024-07-15 22:13:45.372328] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.090 [2024-07-15 22:13:45.385153] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.090 [2024-07-15 22:13:45.385168] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.090 [2024-07-15 22:13:45.398120] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.090 [2024-07-15 22:13:45.398138] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.090 [2024-07-15 22:13:45.411080] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.090 [2024-07-15 22:13:45.411101] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.351 [2024-07-15 22:13:45.423698] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.352 [2024-07-15 22:13:45.423712] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.352 [2024-07-15 22:13:45.436623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.352 [2024-07-15 22:13:45.436638] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.352 [2024-07-15 22:13:45.450069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.352 [2024-07-15 22:13:45.450083] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.352 [2024-07-15 22:13:45.463304] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.352 [2024-07-15 22:13:45.463317] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.352 [2024-07-15 22:13:45.475788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.352 [2024-07-15 22:13:45.475803] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.352 [2024-07-15 22:13:45.489426] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.352 [2024-07-15 22:13:45.489441] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.352 [2024-07-15 22:13:45.502720] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.352 [2024-07-15 22:13:45.502735] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.352 [2024-07-15 22:13:45.515787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.352 [2024-07-15 22:13:45.515801] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.352 [2024-07-15 22:13:45.528661] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.352 [2024-07-15 22:13:45.528675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.352 [2024-07-15 22:13:45.541194] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.352 [2024-07-15 22:13:45.541208] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.352 [2024-07-15 22:13:45.554327] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.352 [2024-07-15 22:13:45.554341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.352 [2024-07-15 22:13:45.567883] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.352 [2024-07-15 22:13:45.567897] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.352 [2024-07-15 22:13:45.580860] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.352 [2024-07-15 22:13:45.580875] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.352 [2024-07-15 22:13:45.594191] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.352 [2024-07-15 22:13:45.594206] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.352 [2024-07-15 22:13:45.607569] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.352 [2024-07-15 22:13:45.607584] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.352 [2024-07-15 22:13:45.620038] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.352 [2024-07-15 22:13:45.620052] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.352 [2024-07-15 22:13:45.634132] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.352 [2024-07-15 22:13:45.634146] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.352 [2024-07-15 22:13:45.646809] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.352 [2024-07-15 22:13:45.646824] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.352 [2024-07-15 22:13:45.659950] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.352 [2024-07-15 22:13:45.659968] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.352 [2024-07-15 22:13:45.672725] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.352 [2024-07-15 22:13:45.672740] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.614 [2024-07-15 22:13:45.685590] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.614 [2024-07-15 22:13:45.685605] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.614 [2024-07-15 22:13:45.699177] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.614 [2024-07-15 22:13:45.699191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.614 [2024-07-15 22:13:45.712676] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.614 [2024-07-15 22:13:45.712691] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.614 [2024-07-15 22:13:45.725955] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.614 [2024-07-15 22:13:45.725969] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.614 [2024-07-15 22:13:45.738413] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.614 [2024-07-15 22:13:45.738428] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.614 [2024-07-15 22:13:45.750976] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.614 [2024-07-15 22:13:45.750990] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.614 [2024-07-15 22:13:45.764194] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.614 [2024-07-15 22:13:45.764209] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.614 [2024-07-15 22:13:45.777570] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.614 [2024-07-15 22:13:45.777584] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.614 [2024-07-15 22:13:45.790213] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.614 [2024-07-15 22:13:45.790228] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.614 [2024-07-15 22:13:45.803541] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.614 [2024-07-15 22:13:45.803556] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.614 [2024-07-15 22:13:45.816970] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.614 [2024-07-15 22:13:45.816984] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.614 [2024-07-15 22:13:45.829563] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.614 [2024-07-15 22:13:45.829577] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.614 [2024-07-15 22:13:45.841668] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.614 [2024-07-15 22:13:45.841682] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.614 [2024-07-15 22:13:45.854577] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.614 [2024-07-15 22:13:45.854591] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.614 [2024-07-15 22:13:45.867333] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.614 [2024-07-15 22:13:45.867347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.614 [2024-07-15 22:13:45.880745] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.614 [2024-07-15 22:13:45.880759] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.614 [2024-07-15 22:13:45.894088] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.614 [2024-07-15 22:13:45.894102] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.614 [2024-07-15 22:13:45.907488] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.614 [2024-07-15 22:13:45.907504] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.614 [2024-07-15 22:13:45.919936] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.614 [2024-07-15 22:13:45.919951] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.614 [2024-07-15 22:13:45.933145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.614 [2024-07-15 22:13:45.933160] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.876 [2024-07-15 22:13:45.946462] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.876 [2024-07-15 22:13:45.946476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.876 [2024-07-15 22:13:45.959928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.876 [2024-07-15 22:13:45.959942] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.876 [2024-07-15 22:13:45.973133] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.876 [2024-07-15 22:13:45.973148] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.876 [2024-07-15 22:13:45.986235] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.876 [2024-07-15 22:13:45.986250] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.876 [2024-07-15 22:13:45.999703] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.876 [2024-07-15 22:13:45.999718] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.876 [2024-07-15 22:13:46.012900] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.876 [2024-07-15 22:13:46.012915] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.876 [2024-07-15 22:13:46.026478] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.876 [2024-07-15 22:13:46.026493] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.876 [2024-07-15 22:13:46.039902] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.876 [2024-07-15 22:13:46.039918] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.876 [2024-07-15 22:13:46.053498] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.876 [2024-07-15 22:13:46.053513] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.876 [2024-07-15 22:13:46.066755] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.876 [2024-07-15 22:13:46.066769] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.876 [2024-07-15 22:13:46.079866] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.876 [2024-07-15 22:13:46.079882] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.876 [2024-07-15 22:13:46.092658] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.876 [2024-07-15 22:13:46.092673] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.876 [2024-07-15 22:13:46.105230] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.876 [2024-07-15 22:13:46.105245] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.876 [2024-07-15 22:13:46.117892] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.876 [2024-07-15 22:13:46.117906] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.876 [2024-07-15 22:13:46.130633] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.876 [2024-07-15 22:13:46.130648] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.876 [2024-07-15 22:13:46.143656] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.876 [2024-07-15 22:13:46.143671] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.876 [2024-07-15 22:13:46.157115] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.876 [2024-07-15 22:13:46.157134] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.876 [2024-07-15 22:13:46.169708] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.876 [2024-07-15 22:13:46.169722] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.876 [2024-07-15 22:13:46.182663] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.876 [2024-07-15 22:13:46.182677] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.876 [2024-07-15 22:13:46.196005] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.876 [2024-07-15 22:13:46.196019] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.137 [2024-07-15 22:13:46.209044] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.137 [2024-07-15 22:13:46.209059] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.137 [2024-07-15 22:13:46.221539] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.137 [2024-07-15 22:13:46.221553] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.137 [2024-07-15 22:13:46.234422] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.137 [2024-07-15 22:13:46.234437] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.137 [2024-07-15 22:13:46.247041] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.137 [2024-07-15 22:13:46.247055] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.137 [2024-07-15 22:13:46.260085] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.137 [2024-07-15 22:13:46.260101] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.137 [2024-07-15 22:13:46.273008] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.137 [2024-07-15 22:13:46.273023] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.137 [2024-07-15 22:13:46.286038] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.137 [2024-07-15 22:13:46.286053] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.137 [2024-07-15 22:13:46.299330] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.138 [2024-07-15 22:13:46.299345] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.138 [2024-07-15 22:13:46.311648] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.138 [2024-07-15 22:13:46.311663] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.138 [2024-07-15 22:13:46.325082] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.138 [2024-07-15 22:13:46.325097] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.138 [2024-07-15 22:13:46.337564] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.138 [2024-07-15 22:13:46.337578] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.138 [2024-07-15 22:13:46.349961] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.138 [2024-07-15 22:13:46.349976] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.138 [2024-07-15 22:13:46.363591] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.138 [2024-07-15 22:13:46.363606] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.138 [2024-07-15 22:13:46.376783] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.138 [2024-07-15 22:13:46.376798] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.138 [2024-07-15 22:13:46.389855] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.138 [2024-07-15 22:13:46.389870] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.138 [2024-07-15 22:13:46.402981] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.138 [2024-07-15 22:13:46.402996] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.138 [2024-07-15 22:13:46.416280] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.138 [2024-07-15 22:13:46.416295] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.138 [2024-07-15 22:13:46.429480] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.138 [2024-07-15 22:13:46.429495] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.138 [2024-07-15 22:13:46.442660] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.138 [2024-07-15 22:13:46.442676] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.138 [2024-07-15 22:13:46.455859] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.138 [2024-07-15 22:13:46.455874] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.398 [2024-07-15 22:13:46.468874] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.398 [2024-07-15 22:13:46.468889] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.398 [2024-07-15 22:13:46.481668] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.398 [2024-07-15 22:13:46.481683] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.398 [2024-07-15 22:13:46.494396] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.398 [2024-07-15 22:13:46.494411] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.398 [2024-07-15 22:13:46.506709] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.398 [2024-07-15 22:13:46.506724] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.398 [2024-07-15 22:13:46.519800] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.398 [2024-07-15 22:13:46.519814] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.398 [2024-07-15 22:13:46.532690] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.398 [2024-07-15 22:13:46.532705] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.398 [2024-07-15 22:13:46.546075] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.398 [2024-07-15 22:13:46.546090] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.398 [2024-07-15 22:13:46.559568] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.398 [2024-07-15 22:13:46.559583] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.398 [2024-07-15 22:13:46.572686] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.398 [2024-07-15 22:13:46.572700] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.398 [2024-07-15 22:13:46.585771] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.398 [2024-07-15 22:13:46.585786] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.399 [2024-07-15 22:13:46.598388] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.399 [2024-07-15 22:13:46.598402] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.399 [2024-07-15 22:13:46.611533] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.399 [2024-07-15 22:13:46.611548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.399 [2024-07-15 22:13:46.624811] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.399 [2024-07-15 22:13:46.624826] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.399 [2024-07-15 22:13:46.638352] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.399 [2024-07-15 22:13:46.638366] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.399 [2024-07-15 22:13:46.651503] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.399 [2024-07-15 22:13:46.651518] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.399 [2024-07-15 22:13:46.664640] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.399 [2024-07-15 22:13:46.664654] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.399 [2024-07-15 22:13:46.677610] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.399 [2024-07-15 22:13:46.677625] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.399 [2024-07-15 22:13:46.690681] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.399 [2024-07-15 22:13:46.690696] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.399 [2024-07-15 22:13:46.704095] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.399 [2024-07-15 22:13:46.704109] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.399 [2024-07-15 22:13:46.717173] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.399 [2024-07-15 22:13:46.717187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.659 [2024-07-15 22:13:46.730539] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.659 [2024-07-15 22:13:46.730554] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.659 [2024-07-15 22:13:46.743735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.659 [2024-07-15 22:13:46.743750] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.659 [2024-07-15 22:13:46.756244] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.659 [2024-07-15 22:13:46.756258] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.659 [2024-07-15 22:13:46.769232] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.659 [2024-07-15 22:13:46.769247] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.659 [2024-07-15 22:13:46.782233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.659 [2024-07-15 22:13:46.782248] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.659 [2024-07-15 22:13:46.794887] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.659 [2024-07-15 22:13:46.794901] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.659 [2024-07-15 22:13:46.807736] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.659 [2024-07-15 22:13:46.807751] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.659 [2024-07-15 22:13:46.820781] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.659 [2024-07-15 22:13:46.820795] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.659 [2024-07-15 22:13:46.834044] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.659 [2024-07-15 22:13:46.834059] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.659 [2024-07-15 22:13:46.847614] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.659 [2024-07-15 22:13:46.847629] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.659 [2024-07-15 22:13:46.860571] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.659 [2024-07-15 22:13:46.860586] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.659 [2024-07-15 22:13:46.873833] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.659 [2024-07-15 22:13:46.873848] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.659 [2024-07-15 22:13:46.886645] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.659 [2024-07-15 22:13:46.886663] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.659 [2024-07-15 22:13:46.899759] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.659 [2024-07-15 22:13:46.899773] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.659 [2024-07-15 22:13:46.913059] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.659 [2024-07-15 22:13:46.913073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.659 [2024-07-15 22:13:46.926104] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.659 [2024-07-15 22:13:46.926118] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.659 [2024-07-15 22:13:46.939318] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.660 [2024-07-15 22:13:46.939332] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.660 [2024-07-15 22:13:46.952834] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.660 [2024-07-15 22:13:46.952849] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.660 [2024-07-15 22:13:46.966186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.660 [2024-07-15 22:13:46.966201] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.660 [2024-07-15 22:13:46.979615] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.660 [2024-07-15 22:13:46.979629] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.920 [2024-07-15 22:13:46.993165] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.920 [2024-07-15 22:13:46.993180] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.920 [2024-07-15 22:13:47.005919] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.920 [2024-07-15 22:13:47.005934] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.920 [2024-07-15 22:13:47.019219] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.920 [2024-07-15 22:13:47.019233] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.920 [2024-07-15 22:13:47.032747] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.920 [2024-07-15 22:13:47.032762] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.920 [2024-07-15 22:13:47.046236] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.920 [2024-07-15 22:13:47.046251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.920 [2024-07-15 22:13:47.059509] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.920 [2024-07-15 22:13:47.059523] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.920 [2024-07-15 22:13:47.072359] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.920 [2024-07-15 22:13:47.072373] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.920 [2024-07-15 22:13:47.085985] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.920 [2024-07-15 22:13:47.086000] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.920 [2024-07-15 22:13:47.099759] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.920 [2024-07-15 22:13:47.099774] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.920 [2024-07-15 22:13:47.113148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.920 [2024-07-15 22:13:47.113163] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.920 [2024-07-15 22:13:47.126388] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.920 [2024-07-15 22:13:47.126403] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.920 [2024-07-15 22:13:47.139598] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.920 [2024-07-15 22:13:47.139617] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.920 [2024-07-15 22:13:47.152745] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.920 [2024-07-15 22:13:47.152759] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.920 [2024-07-15 22:13:47.165646] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.920 [2024-07-15 22:13:47.165660] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.920 [2024-07-15 22:13:47.178902] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.920 [2024-07-15 22:13:47.178916] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.920 [2024-07-15 22:13:47.191311] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.920 [2024-07-15 22:13:47.191326] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.920 [2024-07-15 22:13:47.204510] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.920 [2024-07-15 22:13:47.204524] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.920 [2024-07-15 22:13:47.217931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.920 [2024-07-15 22:13:47.217945] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.920 [2024-07-15 22:13:47.231014] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.920 [2024-07-15 22:13:47.231028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
[... the same subsystem.c:2058 "Requested NSID 1 already in use" / nvmf_rpc.c:1553 "Unable to add namespace" error pair repeats unchanged at roughly 13 ms intervals from 22:13:47.231 through 22:13:50.075 (elapsed 00:16:21.920 to 00:16:24.849); the intervening repetitions are condensed here ...] 
[2024-07-15 22:13:50.075345] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.849 [2024-07-15 22:13:50.075361] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.849 [2024-07-15 22:13:50.088704] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.849 [2024-07-15 22:13:50.088720] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.849 [2024-07-15 22:13:50.101916] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.849 [2024-07-15 22:13:50.101933] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.849 [2024-07-15 22:13:50.115265] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.849 [2024-07-15 22:13:50.115282] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.849 [2024-07-15 22:13:50.127832] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.849 [2024-07-15 22:13:50.127848] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.849 [2024-07-15 22:13:50.141736] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.849 [2024-07-15 22:13:50.141752] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.849 [2024-07-15 22:13:50.154509] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.849 [2024-07-15 22:13:50.154525] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.849 [2024-07-15 22:13:50.168041] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.849 [2024-07-15 22:13:50.168057] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.110 [2024-07-15 22:13:50.181513] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.110 [2024-07-15 22:13:50.181528] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.110 00:16:25.111 Latency(us) 00:16:25.111 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:25.111 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:16:25.111 Nvme1n1 : 5.00 19467.19 152.09 0.00 0.00 6568.85 2539.52 18896.21 00:16:25.111 =================================================================================================================== 00:16:25.111 Total : 19467.19 152.09 0.00 0.00 6568.85 2539.52 18896.21 00:16:25.111 [2024-07-15 22:13:50.191077] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.111 [2024-07-15 22:13:50.191090] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.111 [2024-07-15 22:13:50.203103] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.111 [2024-07-15 22:13:50.203114] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.111 [2024-07-15 22:13:50.215141] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.111 [2024-07-15 22:13:50.215149] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.111 [2024-07-15 22:13:50.227170] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.111 [2024-07-15 22:13:50.227181] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.111 [2024-07-15 22:13:50.239196] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.111 [2024-07-15 22:13:50.239207] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.111 [2024-07-15 22:13:50.251222] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.111 [2024-07-15 22:13:50.251232] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.111 [2024-07-15 22:13:50.263265] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.111 [2024-07-15 22:13:50.263272] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.111 [2024-07-15 22:13:50.275284] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.111 [2024-07-15 22:13:50.275292] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.111 [2024-07-15 22:13:50.287317] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.111 [2024-07-15 22:13:50.287327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.111 [2024-07-15 22:13:50.299348] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.111 [2024-07-15 22:13:50.299356] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.111 [2024-07-15 22:13:50.311377] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.111 [2024-07-15 22:13:50.311384] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.111 [2024-07-15 22:13:50.323408] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.111 [2024-07-15 22:13:50.323417] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.111 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2740756) - No such process 00:16:25.111 22:13:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2740756 00:16:25.111 22:13:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:25.111 22:13:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.111 22:13:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:25.111 22:13:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.111 22:13:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:25.111 22:13:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.111 22:13:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:25.111 delay0 00:16:25.111 22:13:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.111 22:13:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:25.111 22:13:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.111 22:13:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:25.111 22:13:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.111 22:13:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w 
randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:25.111 EAL: No free 2048 kB hugepages reported on node 1 00:16:25.371 [2024-07-15 22:13:50.466752] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:31.949 Initializing NVMe Controllers 00:16:31.949 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:31.949 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:31.949 Initialization complete. Launching workers. 00:16:31.949 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 186 00:16:31.949 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 473, failed to submit 33 00:16:31.949 success 305, unsuccess 168, failed 0 00:16:31.949 22:13:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:31.949 22:13:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:16:31.949 22:13:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:31.949 22:13:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:16:31.949 22:13:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:31.949 22:13:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:16:31.949 22:13:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:31.949 22:13:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:31.949 rmmod nvme_tcp 00:16:31.949 rmmod nvme_fabrics 00:16:31.949 rmmod nvme_keyring 00:16:31.949 22:13:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:31.949 22:13:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:16:31.949 22:13:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:16:31.949 22:13:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2738509 ']' 00:16:31.949 22:13:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2738509 00:16:31.949 22:13:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 2738509 ']' 00:16:31.949 22:13:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 2738509 00:16:31.949 22:13:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:16:31.949 22:13:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:31.949 22:13:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2738509 00:16:31.950 22:13:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:31.950 22:13:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:31.950 22:13:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2738509' 00:16:31.950 killing process with pid 2738509 00:16:31.950 22:13:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 2738509 00:16:31.950 22:13:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 2738509 00:16:31.950 22:13:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:31.950 22:13:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:31.950 22:13:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:31.950 22:13:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:31.950 22:13:56 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:31.950 22:13:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:31.950 22:13:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:31.950 22:13:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:33.865 22:13:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:33.865 00:16:33.865 real 0m32.836s 00:16:33.865 user 0m44.737s 00:16:33.865 sys 0m9.770s 00:16:33.865 22:13:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:33.865 22:13:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:33.865 ************************************ 00:16:33.865 END TEST nvmf_zcopy 00:16:33.865 ************************************ 00:16:33.865 22:13:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:33.865 22:13:58 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:33.865 22:13:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:33.865 22:13:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:33.865 22:13:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:33.865 ************************************ 00:16:33.865 START TEST nvmf_nmic 00:16:33.865 ************************************ 00:16:33.865 22:13:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:33.865 * Looking for test storage... 00:16:33.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic 
-- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:33.865 22:13:59 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:16:33.865 22:13:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:40.486 22:14:05 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:40.486 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:40.486 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:16:40.486 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:40.486 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:40.486 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:40.747 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:40.747 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:40.747 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:40.747 22:14:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:40.747 22:14:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:40.747 22:14:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:40.747 22:14:06 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:40.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:40.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.557 ms 00:16:40.747 00:16:40.747 --- 10.0.0.2 ping statistics --- 00:16:40.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.747 rtt min/avg/max/mdev = 0.557/0.557/0.557/0.000 ms 00:16:40.747 22:14:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:41.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:41.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.352 ms 00:16:41.008 00:16:41.008 --- 10.0.0.1 ping statistics --- 00:16:41.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.008 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:16:41.008 22:14:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:41.008 22:14:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:16:41.008 22:14:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:41.008 22:14:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:41.008 22:14:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:41.008 22:14:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:41.008 22:14:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:41.008 22:14:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:41.008 22:14:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:41.008 22:14:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:41.008 22:14:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:41.008 22:14:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:41.008 22:14:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:41.008 22:14:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2747214 00:16:41.008 22:14:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2747214 00:16:41.008 22:14:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:41.008 22:14:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 2747214 ']' 00:16:41.009 22:14:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.009 22:14:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:41.009 22:14:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:41.009 22:14:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:41.009 22:14:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:41.009 [2024-07-15 22:14:06.197135] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
00:16:41.009 [2024-07-15 22:14:06.197185] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:41.009 EAL: No free 2048 kB hugepages reported on node 1 00:16:41.009 [2024-07-15 22:14:06.281033] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:41.270 [2024-07-15 22:14:06.354022] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:41.270 [2024-07-15 22:14:06.354060] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:41.270 [2024-07-15 22:14:06.354067] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:41.270 [2024-07-15 22:14:06.354073] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:41.270 [2024-07-15 22:14:06.354078] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:41.270 [2024-07-15 22:14:06.354149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:41.270 [2024-07-15 22:14:06.354375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.270 [2024-07-15 22:14:06.354376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:41.270 [2024-07-15 22:14:06.354224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:41.841 22:14:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:41.841 22:14:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:16:41.841 22:14:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:41.841 22:14:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:41.841 22:14:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:41.841 22:14:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:41.841 22:14:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:41.841 22:14:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.841 22:14:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:41.841 [2024-07-15 22:14:07.003616] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:41.841 22:14:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.841 22:14:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:41.841 22:14:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.841 22:14:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:41.841 Malloc0 00:16:41.841 22:14:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.841 22:14:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:41.841 22:14:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.841 22:14:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:41.841 22:14:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.841 22:14:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:41.841 22:14:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.841 22:14:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:41.841 22:14:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.841 22:14:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:41.841 22:14:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.841 22:14:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:41.841 [2024-07-15 22:14:07.063091] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:41.841 22:14:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.841 22:14:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:41.841 test case1: single bdev can't be used in multiple subsystems 00:16:41.841 22:14:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:41.841 22:14:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.841 22:14:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:41.841 22:14:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.841 22:14:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:41.841 22:14:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.841 22:14:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:41.841 22:14:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.841 22:14:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:16:41.841 22:14:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:41.841 22:14:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.841 22:14:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:41.841 [2024-07-15 22:14:07.099026] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:41.841 [2024-07-15 22:14:07.099045] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:41.841 [2024-07-15 22:14:07.099052] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.841 request: 00:16:41.841 { 00:16:41.841 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:41.841 "namespace": { 00:16:41.841 "bdev_name": "Malloc0", 00:16:41.841 "no_auto_visible": false 00:16:41.841 }, 00:16:41.841 "method": "nvmf_subsystem_add_ns", 00:16:41.841 "req_id": 1 00:16:41.841 } 00:16:41.841 Got JSON-RPC error response 00:16:41.841 response: 00:16:41.841 { 00:16:41.841 "code": -32602, 00:16:41.841 "message": "Invalid parameters" 00:16:41.841 } 00:16:41.842 22:14:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:41.842 22:14:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:16:41.842 22:14:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:41.842 22:14:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # 
echo ' Adding namespace failed - expected result.' 00:16:41.842 Adding namespace failed - expected result. 00:16:41.842 22:14:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:41.842 test case2: host connect to nvmf target in multiple paths 00:16:41.842 22:14:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:41.842 22:14:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.842 22:14:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:41.842 [2024-07-15 22:14:07.111158] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:41.842 22:14:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.842 22:14:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:43.757 22:14:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:45.141 22:14:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:45.141 22:14:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:16:45.141 22:14:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:45.141 22:14:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:45.141 22:14:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:16:47.050 22:14:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:47.050 22:14:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:47.050 22:14:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:47.050 22:14:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:47.050 22:14:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:47.050 22:14:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:16:47.050 22:14:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:47.050 [global] 00:16:47.050 thread=1 00:16:47.050 invalidate=1 00:16:47.050 rw=write 00:16:47.050 time_based=1 00:16:47.050 runtime=1 00:16:47.050 ioengine=libaio 00:16:47.050 direct=1 00:16:47.050 bs=4096 00:16:47.050 iodepth=1 00:16:47.050 norandommap=0 00:16:47.050 numjobs=1 00:16:47.050 00:16:47.050 verify_dump=1 00:16:47.050 verify_backlog=512 00:16:47.050 verify_state_save=0 00:16:47.050 do_verify=1 00:16:47.050 verify=crc32c-intel 00:16:47.050 [job0] 00:16:47.050 filename=/dev/nvme0n1 00:16:47.050 Could not set queue depth (nvme0n1) 00:16:47.310 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:47.310 fio-3.35 00:16:47.310 Starting 1 thread 00:16:48.692 00:16:48.693 job0: (groupid=0, jobs=1): err= 0: pid=2748685: Mon Jul 15 22:14:13 2024 00:16:48.693 read: IOPS=449, BW=1798KiB/s 
(1841kB/s)(1800KiB/1001msec) 00:16:48.693 slat (nsec): min=23657, max=57158, avg=25016.65, stdev=3791.57 00:16:48.693 clat (usec): min=997, max=1343, avg=1202.30, stdev=48.84 00:16:48.693 lat (usec): min=1022, max=1387, avg=1227.32, stdev=48.65 00:16:48.693 clat percentiles (usec): 00:16:48.693 | 1.00th=[ 1045], 5.00th=[ 1106], 10.00th=[ 1139], 20.00th=[ 1172], 00:16:48.693 | 30.00th=[ 1188], 40.00th=[ 1205], 50.00th=[ 1205], 60.00th=[ 1221], 00:16:48.693 | 70.00th=[ 1221], 80.00th=[ 1237], 90.00th=[ 1254], 95.00th=[ 1270], 00:16:48.693 | 99.00th=[ 1303], 99.50th=[ 1319], 99.90th=[ 1336], 99.95th=[ 1336], 00:16:48.693 | 99.99th=[ 1336] 00:16:48.693 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:16:48.693 slat (nsec): min=9847, max=67649, avg=31128.82, stdev=6714.49 00:16:48.693 clat (usec): min=541, max=1004, avg=828.67, stdev=80.25 00:16:48.693 lat (usec): min=552, max=1035, avg=859.79, stdev=81.83 00:16:48.693 clat percentiles (usec): 00:16:48.693 | 1.00th=[ 603], 5.00th=[ 676], 10.00th=[ 734], 20.00th=[ 766], 00:16:48.693 | 30.00th=[ 791], 40.00th=[ 807], 50.00th=[ 840], 60.00th=[ 865], 00:16:48.693 | 70.00th=[ 881], 80.00th=[ 898], 90.00th=[ 914], 95.00th=[ 938], 00:16:48.693 | 99.00th=[ 971], 99.50th=[ 988], 99.90th=[ 1004], 99.95th=[ 1004], 00:16:48.693 | 99.99th=[ 1004] 00:16:48.693 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:16:48.693 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:48.693 lat (usec) : 750=7.69%, 1000=45.53% 00:16:48.693 lat (msec) : 2=46.78% 00:16:48.693 cpu : usr=1.30%, sys=3.00%, ctx=962, majf=0, minf=1 00:16:48.693 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:48.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.693 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.693 issued rwts: total=450,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:48.693 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:48.693 00:16:48.693 Run status group 0 (all jobs): 00:16:48.693 READ: bw=1798KiB/s (1841kB/s), 1798KiB/s-1798KiB/s (1841kB/s-1841kB/s), io=1800KiB (1843kB), run=1001-1001msec 00:16:48.693 WRITE: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:16:48.693 00:16:48.693 Disk stats (read/write): 00:16:48.693 nvme0n1: ios=417/512, merge=0/0, ticks=504/386, in_queue=890, util=94.59% 00:16:48.693 22:14:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:48.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:48.693 22:14:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:48.693 22:14:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:16:48.693 22:14:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:48.693 22:14:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:48.693 22:14:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:48.693 22:14:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:48.693 22:14:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:16:48.693 22:14:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:48.693 22:14:13 nvmf_tcp.nvmf_nmic -- 
target/nmic.sh@53 -- # nvmftestfini 00:16:48.693 22:14:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:48.693 22:14:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:16:48.693 22:14:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:48.693 22:14:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:16:48.693 22:14:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:48.693 22:14:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:48.693 rmmod nvme_tcp 00:16:48.693 rmmod nvme_fabrics 00:16:48.693 rmmod nvme_keyring 00:16:48.693 22:14:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:48.693 22:14:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:16:48.693 22:14:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:16:48.693 22:14:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2747214 ']' 00:16:48.693 22:14:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2747214 00:16:48.693 22:14:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 2747214 ']' 00:16:48.693 22:14:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 2747214 00:16:48.693 22:14:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:16:48.693 22:14:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:48.693 22:14:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2747214 00:16:48.953 22:14:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:48.953 22:14:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:48.953 22:14:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2747214' 00:16:48.953 killing process with pid 2747214 00:16:48.953 22:14:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 2747214 00:16:48.953 22:14:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 2747214 00:16:48.953 22:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:48.953 22:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:48.953 22:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:48.953 22:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:48.953 22:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:48.953 22:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.953 22:14:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:48.953 22:14:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.504 22:14:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:51.504 00:16:51.504 real 0m17.288s 00:16:51.504 user 0m44.542s 00:16:51.504 sys 0m5.963s 00:16:51.504 22:14:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:51.504 22:14:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:51.504 ************************************ 00:16:51.504 END TEST nvmf_nmic 00:16:51.504 ************************************ 00:16:51.504 22:14:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:51.504 22:14:16 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:51.504 22:14:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:51.504 22:14:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:51.504 22:14:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:51.504 ************************************ 00:16:51.504 START TEST nvmf_fio_target 00:16:51.504 ************************************ 00:16:51.504 22:14:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:51.504 * Looking for test storage... 00:16:51.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:51.505 22:14:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.091 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:58.091 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:58.091 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:58.091 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:58.091 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:58.091 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:58.091 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:58.091 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:58.091 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:58.091 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:16:58.091 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:58.091 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:16:58.091 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:58.091 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:16:58.091 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:58.091 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:58.091 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:58.091 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:58.091 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:58.091 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:58.091 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:58.091 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:58.091 22:14:22 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:58.091 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:58.091 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:58.091 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:58.091 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:58.091 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:58.091 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:58.091 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:58.091 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:58.091 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:58.091 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:58.091 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:58.091 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:58.091 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:58.091 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:58.091 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:58.091 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:58.092 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:58.092 22:14:22 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:58.092 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:58.092 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:58.092 22:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:16:58.092 22:14:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:58.092 22:14:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:58.092 22:14:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:58.092 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:58.092 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.529 ms 00:16:58.092 00:16:58.092 --- 10.0.0.2 ping statistics --- 00:16:58.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:58.092 rtt min/avg/max/mdev = 0.529/0.529/0.529/0.000 ms 00:16:58.092 22:14:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:58.092 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:58.092 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.397 ms 00:16:58.092 00:16:58.092 --- 10.0.0.1 ping statistics --- 00:16:58.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:58.092 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:16:58.092 22:14:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:58.092 22:14:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:16:58.092 22:14:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:58.092 22:14:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:58.092 22:14:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:58.092 22:14:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:58.092 22:14:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:58.092 22:14:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:58.092 22:14:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:58.092 22:14:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:58.092 22:14:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:58.092 22:14:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:58.092 22:14:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.092 22:14:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2753038 00:16:58.092 22:14:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2753038 00:16:58.092 22:14:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:58.092 22:14:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 2753038 ']' 00:16:58.092 22:14:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.092 22:14:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:58.092 22:14:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
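
The nvmftestinit trace above reduces to a small amount of network plumbing: the target-side port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, the initiator-side port (cvl_0_1) stays in the root namespace as 10.0.0.1, TCP port 4420 is opened in iptables, and reachability is checked in both directions before the target application is started inside the namespace. A condensed sketch, using the interface names and addresses from this run (they are host-specific), looks like:

  # isolate the target NIC in its own namespace so initiator and target use separate stacks
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # initiator keeps 10.0.0.1 in the root namespace, target gets 10.0.0.2 inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # open the default NVMe/TCP port and verify the link both ways
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
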
00:16:58.092 22:14:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:58.092 22:14:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.092 [2024-07-15 22:14:23.183890] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:16:58.092 [2024-07-15 22:14:23.183946] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:58.092 EAL: No free 2048 kB hugepages reported on node 1 00:16:58.092 [2024-07-15 22:14:23.252582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:58.092 [2024-07-15 22:14:23.318702] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:58.092 [2024-07-15 22:14:23.318734] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:58.092 [2024-07-15 22:14:23.318742] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:58.092 [2024-07-15 22:14:23.318748] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:58.092 [2024-07-15 22:14:23.318754] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:58.092 [2024-07-15 22:14:23.318887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:58.092 [2024-07-15 22:14:23.319007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:58.092 [2024-07-15 22:14:23.319166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.092 [2024-07-15 22:14:23.319167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:58.663 22:14:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:58.663 22:14:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:16:58.663 22:14:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:58.663 22:14:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:58.663 22:14:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.923 22:14:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:58.923 22:14:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:58.923 [2024-07-15 22:14:24.144260] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:58.923 22:14:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:59.184 22:14:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:59.184 22:14:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:59.445 22:14:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:59.445 22:14:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:59.445 22:14:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
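
The rpc.py calls traced in this and the following entries provision the target end to end: a TCP transport, several malloc bdevs, a RAID0 and a concat bdev layered on top of them, and a single subsystem exposing all of it on 10.0.0.2:4420. Collapsed into one place (rpc.py stands for the full scripts/rpc.py path used above, and the hostnqn/hostid values are the ones defined earlier in this trace), the sequence is roughly:

  # transport with the same options the test passes (-o -u 8192)
  rpc.py nvmf_create_transport -t tcp -o -u 8192

  # backing bdevs: Malloc0..Malloc6, plus raid0 and concat0 built from four of them
  rpc.py bdev_malloc_create 64 512                                   # repeated once per Malloc bdev
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

  # one subsystem, four namespaces, one TCP listener
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator side, from the root namespace
  nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp \
      -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
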
00:16:59.445 22:14:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:59.706 22:14:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:59.706 22:14:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:59.966 22:14:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:59.966 22:14:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:59.966 22:14:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:00.227 22:14:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:17:00.227 22:14:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:00.492 22:14:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:17:00.492 22:14:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:17:00.492 22:14:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:00.838 22:14:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:00.838 22:14:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:00.838 22:14:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:00.838 22:14:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:01.099 22:14:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:01.099 [2024-07-15 22:14:26.409922] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:01.359 22:14:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:17:01.359 22:14:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:17:01.621 22:14:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:03.534 22:14:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:17:03.534 22:14:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:17:03.534 22:14:28 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:03.534 22:14:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:17:03.534 22:14:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:17:03.534 22:14:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:17:05.453 22:14:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:05.453 22:14:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:05.453 22:14:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:05.453 22:14:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:17:05.453 22:14:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:05.453 22:14:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:17:05.453 22:14:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:05.453 [global] 00:17:05.453 thread=1 00:17:05.453 invalidate=1 00:17:05.453 rw=write 00:17:05.453 time_based=1 00:17:05.453 runtime=1 00:17:05.453 ioengine=libaio 00:17:05.453 direct=1 00:17:05.453 bs=4096 00:17:05.453 iodepth=1 00:17:05.453 norandommap=0 00:17:05.453 numjobs=1 00:17:05.453 00:17:05.453 verify_dump=1 00:17:05.453 verify_backlog=512 00:17:05.453 verify_state_save=0 00:17:05.453 do_verify=1 00:17:05.453 verify=crc32c-intel 00:17:05.453 [job0] 00:17:05.453 filename=/dev/nvme0n1 00:17:05.453 [job1] 00:17:05.453 filename=/dev/nvme0n2 00:17:05.453 [job2] 00:17:05.453 filename=/dev/nvme0n3 00:17:05.453 [job3] 00:17:05.453 filename=/dev/nvme0n4 00:17:05.453 Could not set queue depth (nvme0n1) 00:17:05.453 Could not set queue depth (nvme0n2) 00:17:05.453 Could not set queue depth (nvme0n3) 00:17:05.453 Could not set queue depth (nvme0n4) 00:17:05.712 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:05.712 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:05.712 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:05.712 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:05.712 fio-3.35 00:17:05.712 Starting 4 threads 00:17:07.108 00:17:07.108 job0: (groupid=0, jobs=1): err= 0: pid=2754690: Mon Jul 15 22:14:32 2024 00:17:07.108 read: IOPS=15, BW=63.5KiB/s (65.0kB/s)(64.0KiB/1008msec) 00:17:07.108 slat (nsec): min=7484, max=30296, avg=24841.94, stdev=4874.65 00:17:07.108 clat (usec): min=958, max=42168, avg=34289.73, stdev=16520.90 00:17:07.108 lat (usec): min=988, max=42193, avg=34314.57, stdev=16522.64 00:17:07.108 clat percentiles (usec): 00:17:07.108 | 1.00th=[ 963], 5.00th=[ 963], 10.00th=[ 988], 20.00th=[41681], 00:17:07.108 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:07.108 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:07.108 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:07.108 | 99.99th=[42206] 00:17:07.108 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:17:07.108 slat (nsec): min=9600, max=67193, avg=32904.02, stdev=5797.91 
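Note: the [global]/[job*] listing printed above by fio-wrapper is an ordinary fio job file, so the same workload can be reproduced standalone. A minimal sketch follows; the option list is copied from the listing above, the /dev/nvme0n1-4 names assume the same single-controller, four-namespace layout as this run, and the wrapper flags -i 4096 -d 1 -t write -r 1 -v appear to map onto the bs, iodepth, rw, runtime and verify settings seen in the generated file:

cat > nvmf-write.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF

# Run as root against the connected NVMe-oF namespaces
fio nvmf-write.fio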
00:17:07.108 clat (usec): min=552, max=1120, avg=854.96, stdev=89.11 00:17:07.108 lat (usec): min=585, max=1153, avg=887.87, stdev=90.18 00:17:07.108 clat percentiles (usec): 00:17:07.108 | 1.00th=[ 635], 5.00th=[ 693], 10.00th=[ 742], 20.00th=[ 783], 00:17:07.108 | 30.00th=[ 807], 40.00th=[ 840], 50.00th=[ 865], 60.00th=[ 889], 00:17:07.108 | 70.00th=[ 906], 80.00th=[ 930], 90.00th=[ 963], 95.00th=[ 988], 00:17:07.108 | 99.00th=[ 1012], 99.50th=[ 1029], 99.90th=[ 1123], 99.95th=[ 1123], 00:17:07.108 | 99.99th=[ 1123] 00:17:07.108 bw ( KiB/s): min= 4096, max= 4096, per=50.80%, avg=4096.00, stdev= 0.00, samples=1 00:17:07.108 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:07.108 lat (usec) : 750=11.55%, 1000=83.14% 00:17:07.108 lat (msec) : 2=2.84%, 50=2.46% 00:17:07.108 cpu : usr=1.19%, sys=2.09%, ctx=528, majf=0, minf=1 00:17:07.108 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:07.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.108 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:07.108 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:07.108 job1: (groupid=0, jobs=1): err= 0: pid=2754691: Mon Jul 15 22:14:32 2024 00:17:07.108 read: IOPS=12, BW=51.4KiB/s (52.6kB/s)(52.0KiB/1012msec) 00:17:07.108 slat (nsec): min=26469, max=28002, avg=26969.08, stdev=396.81 00:17:07.108 clat (usec): min=36377, max=42021, avg=41496.89, stdev=1543.20 00:17:07.108 lat (usec): min=36403, max=42047, avg=41523.86, stdev=1543.35 00:17:07.108 clat percentiles (usec): 00:17:07.108 | 1.00th=[36439], 5.00th=[36439], 10.00th=[41681], 20.00th=[41681], 00:17:07.108 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:17:07.108 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:07.108 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:07.108 | 99.99th=[42206] 00:17:07.108 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:17:07.108 slat (nsec): min=10126, max=71698, avg=35768.08, stdev=4825.26 00:17:07.108 clat (usec): min=547, max=1188, avg=875.39, stdev=78.91 00:17:07.108 lat (usec): min=558, max=1243, avg=911.16, stdev=79.19 00:17:07.108 clat percentiles (usec): 00:17:07.108 | 1.00th=[ 685], 5.00th=[ 742], 10.00th=[ 766], 20.00th=[ 807], 00:17:07.108 | 30.00th=[ 840], 40.00th=[ 865], 50.00th=[ 881], 60.00th=[ 898], 00:17:07.108 | 70.00th=[ 922], 80.00th=[ 938], 90.00th=[ 963], 95.00th=[ 988], 00:17:07.109 | 99.00th=[ 1045], 99.50th=[ 1106], 99.90th=[ 1188], 99.95th=[ 1188], 00:17:07.109 | 99.99th=[ 1188] 00:17:07.109 bw ( KiB/s): min= 4096, max= 4096, per=50.80%, avg=4096.00, stdev= 0.00, samples=1 00:17:07.109 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:07.109 lat (usec) : 750=5.71%, 1000=88.76% 00:17:07.109 lat (msec) : 2=3.05%, 50=2.48% 00:17:07.109 cpu : usr=1.19%, sys=2.18%, ctx=529, majf=0, minf=1 00:17:07.109 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:07.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.109 issued rwts: total=13,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:07.109 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:07.109 job2: (groupid=0, jobs=1): err= 0: pid=2754692: Mon Jul 15 22:14:32 2024 
00:17:07.109 read: IOPS=12, BW=51.2KiB/s (52.5kB/s)(52.0KiB/1015msec) 00:17:07.109 slat (nsec): min=24466, max=25303, avg=24900.31, stdev=244.19 00:17:07.109 clat (usec): min=41535, max=42045, avg=41930.75, stdev=125.47 00:17:07.109 lat (usec): min=41560, max=42070, avg=41955.65, stdev=125.53 00:17:07.109 clat percentiles (usec): 00:17:07.109 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:17:07.109 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:07.109 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:07.109 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:07.109 | 99.99th=[42206] 00:17:07.109 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:17:07.109 slat (nsec): min=10064, max=67234, avg=33042.36, stdev=5982.51 00:17:07.109 clat (usec): min=633, max=1396, avg=872.58, stdev=86.07 00:17:07.109 lat (usec): min=667, max=1428, avg=905.62, stdev=86.97 00:17:07.109 clat percentiles (usec): 00:17:07.109 | 1.00th=[ 676], 5.00th=[ 750], 10.00th=[ 775], 20.00th=[ 799], 00:17:07.109 | 30.00th=[ 832], 40.00th=[ 857], 50.00th=[ 881], 60.00th=[ 898], 00:17:07.109 | 70.00th=[ 914], 80.00th=[ 930], 90.00th=[ 963], 95.00th=[ 996], 00:17:07.109 | 99.00th=[ 1090], 99.50th=[ 1303], 99.90th=[ 1401], 99.95th=[ 1401], 00:17:07.109 | 99.99th=[ 1401] 00:17:07.109 bw ( KiB/s): min= 4096, max= 4096, per=50.80%, avg=4096.00, stdev= 0.00, samples=1 00:17:07.109 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:07.109 lat (usec) : 750=5.14%, 1000=88.19% 00:17:07.109 lat (msec) : 2=4.19%, 50=2.48% 00:17:07.109 cpu : usr=0.79%, sys=1.58%, ctx=527, majf=0, minf=1 00:17:07.109 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:07.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.109 issued rwts: total=13,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:07.109 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:07.109 job3: (groupid=0, jobs=1): err= 0: pid=2754693: Mon Jul 15 22:14:32 2024 00:17:07.109 read: IOPS=19, BW=78.7KiB/s (80.6kB/s)(80.0KiB/1016msec) 00:17:07.109 slat (nsec): min=24080, max=25022, avg=24523.45, stdev=280.95 00:17:07.109 clat (usec): min=817, max=42872, avg=35863.06, stdev=15091.37 00:17:07.109 lat (usec): min=842, max=42897, avg=35887.58, stdev=15091.28 00:17:07.109 clat percentiles (usec): 00:17:07.109 | 1.00th=[ 816], 5.00th=[ 816], 10.00th=[ 857], 20.00th=[41681], 00:17:07.109 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:07.109 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:07.109 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:17:07.109 | 99.99th=[42730] 00:17:07.109 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:17:07.109 slat (nsec): min=9597, max=54648, avg=30070.71, stdev=6598.87 00:17:07.109 clat (usec): min=192, max=809, avg=545.70, stdev=115.45 00:17:07.109 lat (usec): min=223, max=858, avg=575.77, stdev=116.41 00:17:07.109 clat percentiles (usec): 00:17:07.109 | 1.00th=[ 281], 5.00th=[ 355], 10.00th=[ 400], 20.00th=[ 441], 00:17:07.109 | 30.00th=[ 490], 40.00th=[ 523], 50.00th=[ 553], 60.00th=[ 578], 00:17:07.109 | 70.00th=[ 619], 80.00th=[ 652], 90.00th=[ 693], 95.00th=[ 742], 00:17:07.109 | 99.00th=[ 791], 99.50th=[ 799], 99.90th=[ 807], 99.95th=[ 807], 00:17:07.109 
| 99.99th=[ 807] 00:17:07.109 bw ( KiB/s): min= 4096, max= 4096, per=50.80%, avg=4096.00, stdev= 0.00, samples=1 00:17:07.109 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:07.109 lat (usec) : 250=0.56%, 500=31.02%, 750=61.65%, 1000=3.57% 00:17:07.109 lat (msec) : 50=3.20% 00:17:07.109 cpu : usr=0.69%, sys=1.58%, ctx=532, majf=0, minf=1 00:17:07.109 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:07.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.109 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:07.109 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:07.109 00:17:07.109 Run status group 0 (all jobs): 00:17:07.109 READ: bw=244KiB/s (250kB/s), 51.2KiB/s-78.7KiB/s (52.5kB/s-80.6kB/s), io=248KiB (254kB), run=1008-1016msec 00:17:07.109 WRITE: bw=8063KiB/s (8257kB/s), 2016KiB/s-2032KiB/s (2064kB/s-2081kB/s), io=8192KiB (8389kB), run=1008-1016msec 00:17:07.109 00:17:07.109 Disk stats (read/write): 00:17:07.109 nvme0n1: ios=58/512, merge=0/0, ticks=411/324, in_queue=735, util=88.28% 00:17:07.109 nvme0n2: ios=31/512, merge=0/0, ticks=1301/345, in_queue=1646, util=96.94% 00:17:07.109 nvme0n3: ios=30/512, merge=0/0, ticks=1261/408, in_queue=1669, util=96.83% 00:17:07.109 nvme0n4: ios=14/512, merge=0/0, ticks=507/272, in_queue=779, util=89.52% 00:17:07.109 22:14:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:17:07.109 [global] 00:17:07.109 thread=1 00:17:07.109 invalidate=1 00:17:07.109 rw=randwrite 00:17:07.109 time_based=1 00:17:07.109 runtime=1 00:17:07.109 ioengine=libaio 00:17:07.109 direct=1 00:17:07.109 bs=4096 00:17:07.109 iodepth=1 00:17:07.109 norandommap=0 00:17:07.109 numjobs=1 00:17:07.109 00:17:07.109 verify_dump=1 00:17:07.109 verify_backlog=512 00:17:07.109 verify_state_save=0 00:17:07.109 do_verify=1 00:17:07.109 verify=crc32c-intel 00:17:07.109 [job0] 00:17:07.109 filename=/dev/nvme0n1 00:17:07.109 [job1] 00:17:07.109 filename=/dev/nvme0n2 00:17:07.109 [job2] 00:17:07.109 filename=/dev/nvme0n3 00:17:07.109 [job3] 00:17:07.109 filename=/dev/nvme0n4 00:17:07.109 Could not set queue depth (nvme0n1) 00:17:07.109 Could not set queue depth (nvme0n2) 00:17:07.109 Could not set queue depth (nvme0n3) 00:17:07.109 Could not set queue depth (nvme0n4) 00:17:07.372 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:07.372 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:07.372 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:07.372 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:07.372 fio-3.35 00:17:07.372 Starting 4 threads 00:17:08.769 00:17:08.769 job0: (groupid=0, jobs=1): err= 0: pid=2755207: Mon Jul 15 22:14:33 2024 00:17:08.769 read: IOPS=15, BW=62.0KiB/s (63.5kB/s)(64.0KiB/1032msec) 00:17:08.769 slat (nsec): min=25444, max=30915, avg=26375.62, stdev=1275.64 00:17:08.769 clat (usec): min=1394, max=42955, avg=39472.03, stdev=10160.48 00:17:08.769 lat (usec): min=1420, max=42986, avg=39498.41, stdev=10160.72 00:17:08.769 clat percentiles (usec): 00:17:08.769 | 1.00th=[ 1401], 5.00th=[ 1401], 
10.00th=[41157], 20.00th=[41681], 00:17:08.769 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:17:08.769 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:17:08.769 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:17:08.769 | 99.99th=[42730] 00:17:08.769 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:17:08.769 slat (nsec): min=9628, max=52381, avg=30248.57, stdev=8286.42 00:17:08.769 clat (usec): min=144, max=3184, avg=741.64, stdev=255.33 00:17:08.769 lat (usec): min=154, max=3217, avg=771.89, stdev=259.05 00:17:08.769 clat percentiles (usec): 00:17:08.769 | 1.00th=[ 182], 5.00th=[ 297], 10.00th=[ 367], 20.00th=[ 486], 00:17:08.769 | 30.00th=[ 652], 40.00th=[ 775], 50.00th=[ 824], 60.00th=[ 865], 00:17:08.769 | 70.00th=[ 898], 80.00th=[ 922], 90.00th=[ 955], 95.00th=[ 988], 00:17:08.769 | 99.00th=[ 1090], 99.50th=[ 1401], 99.90th=[ 3195], 99.95th=[ 3195], 00:17:08.769 | 99.99th=[ 3195] 00:17:08.769 bw ( KiB/s): min= 4096, max= 4096, per=51.95%, avg=4096.00, stdev= 0.00, samples=1 00:17:08.769 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:08.769 lat (usec) : 250=1.70%, 500=19.32%, 750=14.96%, 1000=57.39% 00:17:08.769 lat (msec) : 2=3.60%, 4=0.19%, 50=2.84% 00:17:08.769 cpu : usr=1.07%, sys=1.75%, ctx=531, majf=0, minf=1 00:17:08.769 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:08.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.769 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.769 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.769 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:08.769 job1: (groupid=0, jobs=1): err= 0: pid=2755208: Mon Jul 15 22:14:33 2024 00:17:08.769 read: IOPS=14, BW=58.3KiB/s (59.7kB/s)(60.0KiB/1029msec) 00:17:08.769 slat (nsec): min=24450, max=29765, avg=25035.93, stdev=1324.15 00:17:08.769 clat (usec): min=41553, max=42042, avg=41932.92, stdev=119.50 00:17:08.769 lat (usec): min=41578, max=42067, avg=41957.95, stdev=119.63 00:17:08.769 clat percentiles (usec): 00:17:08.769 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:17:08.769 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:17:08.769 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:08.769 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:08.769 | 99.99th=[42206] 00:17:08.769 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:17:08.769 slat (nsec): min=3485, max=54129, avg=30327.62, stdev=7223.20 00:17:08.769 clat (usec): min=191, max=1221, avg=740.35, stdev=187.81 00:17:08.769 lat (usec): min=224, max=1271, avg=770.68, stdev=188.81 00:17:08.769 clat percentiles (usec): 00:17:08.769 | 1.00th=[ 302], 5.00th=[ 379], 10.00th=[ 445], 20.00th=[ 562], 00:17:08.769 | 30.00th=[ 660], 40.00th=[ 717], 50.00th=[ 791], 60.00th=[ 840], 00:17:08.769 | 70.00th=[ 873], 80.00th=[ 906], 90.00th=[ 938], 95.00th=[ 979], 00:17:08.769 | 99.00th=[ 1020], 99.50th=[ 1037], 99.90th=[ 1221], 99.95th=[ 1221], 00:17:08.769 | 99.99th=[ 1221] 00:17:08.769 bw ( KiB/s): min= 4096, max= 4096, per=51.95%, avg=4096.00, stdev= 0.00, samples=1 00:17:08.769 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:08.769 lat (usec) : 250=0.19%, 500=13.85%, 750=27.51%, 1000=52.94% 00:17:08.769 lat (msec) : 2=2.66%, 50=2.85% 00:17:08.769 cpu : 
usr=0.88%, sys=1.36%, ctx=528, majf=0, minf=1 00:17:08.769 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:08.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.769 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.769 issued rwts: total=15,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.769 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:08.769 job2: (groupid=0, jobs=1): err= 0: pid=2755209: Mon Jul 15 22:14:33 2024 00:17:08.769 read: IOPS=16, BW=65.4KiB/s (67.0kB/s)(68.0KiB/1039msec) 00:17:08.769 slat (nsec): min=26300, max=31544, avg=26931.12, stdev=1224.49 00:17:08.769 clat (usec): min=1293, max=42030, avg=37033.63, stdev=13447.51 00:17:08.769 lat (usec): min=1319, max=42057, avg=37060.56, stdev=13447.59 00:17:08.769 clat percentiles (usec): 00:17:08.769 | 1.00th=[ 1287], 5.00th=[ 1287], 10.00th=[ 1352], 20.00th=[41681], 00:17:08.769 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:17:08.770 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:08.770 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:08.770 | 99.99th=[42206] 00:17:08.770 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:17:08.770 slat (nsec): min=9248, max=70116, avg=32872.31, stdev=7759.00 00:17:08.770 clat (usec): min=253, max=1212, avg=756.31, stdev=183.44 00:17:08.770 lat (usec): min=263, max=1245, avg=789.18, stdev=184.72 00:17:08.770 clat percentiles (usec): 00:17:08.770 | 1.00th=[ 310], 5.00th=[ 396], 10.00th=[ 482], 20.00th=[ 586], 00:17:08.770 | 30.00th=[ 676], 40.00th=[ 766], 50.00th=[ 799], 60.00th=[ 840], 00:17:08.770 | 70.00th=[ 881], 80.00th=[ 914], 90.00th=[ 955], 95.00th=[ 988], 00:17:08.770 | 99.00th=[ 1074], 99.50th=[ 1106], 99.90th=[ 1205], 99.95th=[ 1205], 00:17:08.770 | 99.99th=[ 1205] 00:17:08.770 bw ( KiB/s): min= 4096, max= 4096, per=51.95%, avg=4096.00, stdev= 0.00, samples=1 00:17:08.770 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:08.770 lat (usec) : 500=11.34%, 750=25.33%, 1000=56.52% 00:17:08.770 lat (msec) : 2=3.97%, 50=2.84% 00:17:08.770 cpu : usr=1.06%, sys=2.12%, ctx=530, majf=0, minf=1 00:17:08.770 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:08.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.770 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.770 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.770 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:08.770 job3: (groupid=0, jobs=1): err= 0: pid=2755210: Mon Jul 15 22:14:33 2024 00:17:08.770 read: IOPS=14, BW=58.0KiB/s (59.4kB/s)(60.0KiB/1035msec) 00:17:08.770 slat (nsec): min=25122, max=25908, avg=25454.87, stdev=251.90 00:17:08.770 clat (usec): min=1361, max=42046, avg=39228.01, stdev=10475.98 00:17:08.770 lat (usec): min=1386, max=42071, avg=39253.47, stdev=10476.03 00:17:08.770 clat percentiles (usec): 00:17:08.770 | 1.00th=[ 1369], 5.00th=[ 1369], 10.00th=[41681], 20.00th=[41681], 00:17:08.770 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:17:08.770 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:08.770 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:08.770 | 99.99th=[42206] 00:17:08.770 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:17:08.770 slat 
(nsec): min=2961, max=51110, avg=16506.96, stdev=10082.22 00:17:08.770 clat (usec): min=486, max=1090, avg=848.61, stdev=98.17 00:17:08.770 lat (usec): min=491, max=1101, avg=865.11, stdev=100.58 00:17:08.770 clat percentiles (usec): 00:17:08.770 | 1.00th=[ 578], 5.00th=[ 668], 10.00th=[ 717], 20.00th=[ 766], 00:17:08.770 | 30.00th=[ 799], 40.00th=[ 832], 50.00th=[ 865], 60.00th=[ 889], 00:17:08.770 | 70.00th=[ 906], 80.00th=[ 930], 90.00th=[ 963], 95.00th=[ 996], 00:17:08.770 | 99.00th=[ 1029], 99.50th=[ 1057], 99.90th=[ 1090], 99.95th=[ 1090], 00:17:08.770 | 99.99th=[ 1090] 00:17:08.770 bw ( KiB/s): min= 4096, max= 4096, per=51.95%, avg=4096.00, stdev= 0.00, samples=1 00:17:08.770 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:08.770 lat (usec) : 500=0.19%, 750=14.61%, 1000=78.56% 00:17:08.770 lat (msec) : 2=3.98%, 50=2.66% 00:17:08.770 cpu : usr=0.58%, sys=0.48%, ctx=528, majf=0, minf=1 00:17:08.770 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:08.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.770 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.770 issued rwts: total=15,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.770 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:08.770 00:17:08.770 Run status group 0 (all jobs): 00:17:08.770 READ: bw=243KiB/s (248kB/s), 58.0KiB/s-65.4KiB/s (59.4kB/s-67.0kB/s), io=252KiB (258kB), run=1029-1039msec 00:17:08.770 WRITE: bw=7885KiB/s (8074kB/s), 1971KiB/s-1990KiB/s (2018kB/s-2038kB/s), io=8192KiB (8389kB), run=1029-1039msec 00:17:08.770 00:17:08.770 Disk stats (read/write): 00:17:08.770 nvme0n1: ios=44/512, merge=0/0, ticks=1037/301, in_queue=1338, util=99.90% 00:17:08.770 nvme0n2: ios=40/512, merge=0/0, ticks=1388/358, in_queue=1746, util=97.35% 00:17:08.770 nvme0n3: ios=68/512, merge=0/0, ticks=622/300, in_queue=922, util=97.37% 00:17:08.770 nvme0n4: ios=66/512, merge=0/0, ticks=668/421, in_queue=1089, util=97.44% 00:17:08.770 22:14:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:17:08.770 [global] 00:17:08.770 thread=1 00:17:08.770 invalidate=1 00:17:08.770 rw=write 00:17:08.770 time_based=1 00:17:08.770 runtime=1 00:17:08.770 ioengine=libaio 00:17:08.770 direct=1 00:17:08.770 bs=4096 00:17:08.770 iodepth=128 00:17:08.770 norandommap=0 00:17:08.770 numjobs=1 00:17:08.770 00:17:08.770 verify_dump=1 00:17:08.770 verify_backlog=512 00:17:08.770 verify_state_save=0 00:17:08.770 do_verify=1 00:17:08.770 verify=crc32c-intel 00:17:08.770 [job0] 00:17:08.770 filename=/dev/nvme0n1 00:17:08.770 [job1] 00:17:08.770 filename=/dev/nvme0n2 00:17:08.770 [job2] 00:17:08.770 filename=/dev/nvme0n3 00:17:08.770 [job3] 00:17:08.770 filename=/dev/nvme0n4 00:17:08.770 Could not set queue depth (nvme0n1) 00:17:08.770 Could not set queue depth (nvme0n2) 00:17:08.770 Could not set queue depth (nvme0n3) 00:17:08.770 Could not set queue depth (nvme0n4) 00:17:09.032 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:09.032 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:09.032 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:09.032 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:17:09.032 fio-3.35 00:17:09.032 Starting 4 threads 00:17:10.439 00:17:10.439 job0: (groupid=0, jobs=1): err= 0: pid=2755737: Mon Jul 15 22:14:35 2024 00:17:10.439 read: IOPS=5861, BW=22.9MiB/s (24.0MB/s)(23.0MiB/1004msec) 00:17:10.439 slat (nsec): min=916, max=25109k, avg=84279.11, stdev=637923.46 00:17:10.439 clat (usec): min=1404, max=31448, avg=10948.68, stdev=5081.72 00:17:10.439 lat (usec): min=2928, max=31456, avg=11032.95, stdev=5111.74 00:17:10.439 clat percentiles (usec): 00:17:10.439 | 1.00th=[ 4752], 5.00th=[ 6128], 10.00th=[ 6849], 20.00th=[ 7439], 00:17:10.439 | 30.00th=[ 7898], 40.00th=[ 8455], 50.00th=[ 8979], 60.00th=[10028], 00:17:10.439 | 70.00th=[12125], 80.00th=[13829], 90.00th=[16909], 95.00th=[24249], 00:17:10.439 | 99.00th=[29230], 99.50th=[30016], 99.90th=[31327], 99.95th=[31327], 00:17:10.439 | 99.99th=[31327] 00:17:10.439 write: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec); 0 zone resets 00:17:10.439 slat (nsec): min=1573, max=28776k, avg=73521.60, stdev=548119.43 00:17:10.439 clat (usec): min=1797, max=32025, avg=9749.87, stdev=4207.50 00:17:10.439 lat (usec): min=1824, max=32036, avg=9823.39, stdev=4234.66 00:17:10.439 clat percentiles (usec): 00:17:10.439 | 1.00th=[ 3163], 5.00th=[ 4817], 10.00th=[ 5538], 20.00th=[ 6325], 00:17:10.439 | 30.00th=[ 7046], 40.00th=[ 7767], 50.00th=[ 8848], 60.00th=[10028], 00:17:10.439 | 70.00th=[11469], 80.00th=[13173], 90.00th=[15008], 95.00th=[17957], 00:17:10.439 | 99.00th=[21627], 99.50th=[32113], 99.90th=[32113], 99.95th=[32113], 00:17:10.439 | 99.99th=[32113] 00:17:10.439 bw ( KiB/s): min=21560, max=27592, per=29.13%, avg=24576.00, stdev=4265.27, samples=2 00:17:10.439 iops : min= 5390, max= 6898, avg=6144.00, stdev=1066.32, samples=2 00:17:10.439 lat (msec) : 2=0.04%, 4=1.68%, 10=57.53%, 20=37.35%, 50=3.40% 00:17:10.439 cpu : usr=2.49%, sys=5.98%, ctx=642, majf=0, minf=1 00:17:10.439 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:17:10.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.439 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:10.439 issued rwts: total=5885,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:10.439 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:10.439 job1: (groupid=0, jobs=1): err= 0: pid=2755738: Mon Jul 15 22:14:35 2024 00:17:10.439 read: IOPS=5395, BW=21.1MiB/s (22.1MB/s)(22.0MiB/1044msec) 00:17:10.439 slat (nsec): min=903, max=23125k, avg=87064.05, stdev=596694.47 00:17:10.439 clat (usec): min=2817, max=41706, avg=11503.19, stdev=3540.69 00:17:10.439 lat (usec): min=2842, max=43678, avg=11590.25, stdev=3545.49 00:17:10.440 clat percentiles (usec): 00:17:10.440 | 1.00th=[ 3720], 5.00th=[ 7046], 10.00th=[ 8291], 20.00th=[ 9503], 00:17:10.440 | 30.00th=[10159], 40.00th=[10683], 50.00th=[11207], 60.00th=[11600], 00:17:10.440 | 70.00th=[12125], 80.00th=[12911], 90.00th=[15270], 95.00th=[17433], 00:17:10.440 | 99.00th=[26084], 99.50th=[32375], 99.90th=[32900], 99.95th=[32900], 00:17:10.440 | 99.99th=[41681] 00:17:10.440 write: IOPS=5885, BW=23.0MiB/s (24.1MB/s)(24.0MiB/1044msec); 0 zone resets 00:17:10.440 slat (nsec): min=1584, max=7152.7k, avg=74484.52, stdev=402087.26 00:17:10.440 clat (usec): min=850, max=49367, avg=10980.65, stdev=6639.84 00:17:10.440 lat (usec): min=861, max=49369, avg=11055.13, stdev=6642.75 00:17:10.440 clat percentiles (usec): 00:17:10.440 | 1.00th=[ 3130], 5.00th=[ 6128], 10.00th=[ 7111], 20.00th=[ 7832], 00:17:10.440 | 30.00th=[ 8356], 
40.00th=[ 8848], 50.00th=[ 9241], 60.00th=[ 9503], 00:17:10.440 | 70.00th=[10552], 80.00th=[12649], 90.00th=[15795], 95.00th=[20055], 00:17:10.440 | 99.00th=[47449], 99.50th=[47449], 99.90th=[49546], 99.95th=[49546], 00:17:10.440 | 99.99th=[49546] 00:17:10.440 bw ( KiB/s): min=23224, max=24912, per=28.53%, avg=24068.00, stdev=1193.60, samples=2 00:17:10.440 iops : min= 5806, max= 6228, avg=6017.00, stdev=298.40, samples=2 00:17:10.440 lat (usec) : 1000=0.04% 00:17:10.440 lat (msec) : 2=0.15%, 4=1.57%, 10=46.87%, 20=48.07%, 50=3.29% 00:17:10.440 cpu : usr=2.97%, sys=4.60%, ctx=688, majf=0, minf=1 00:17:10.440 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:17:10.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.440 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:10.440 issued rwts: total=5633,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:10.440 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:10.440 job2: (groupid=0, jobs=1): err= 0: pid=2755739: Mon Jul 15 22:14:35 2024 00:17:10.440 read: IOPS=4085, BW=16.0MiB/s (16.7MB/s)(16.7MiB/1044msec) 00:17:10.440 slat (nsec): min=917, max=14299k, avg=113893.30, stdev=705223.73 00:17:10.440 clat (usec): min=4832, max=57093, avg=16082.43, stdev=8815.41 00:17:10.440 lat (usec): min=4835, max=57904, avg=16196.32, stdev=8830.97 00:17:10.440 clat percentiles (usec): 00:17:10.440 | 1.00th=[ 5800], 5.00th=[ 7111], 10.00th=[ 8160], 20.00th=[ 9634], 00:17:10.440 | 30.00th=[11338], 40.00th=[12387], 50.00th=[14353], 60.00th=[15270], 00:17:10.440 | 70.00th=[17433], 80.00th=[20055], 90.00th=[27132], 95.00th=[32900], 00:17:10.440 | 99.00th=[52691], 99.50th=[53216], 99.90th=[56886], 99.95th=[56886], 00:17:10.440 | 99.99th=[56886] 00:17:10.440 write: IOPS=4413, BW=17.2MiB/s (18.1MB/s)(18.0MiB/1044msec); 0 zone resets 00:17:10.440 slat (nsec): min=1588, max=22924k, avg=105000.95, stdev=727168.86 00:17:10.440 clat (usec): min=1310, max=36733, avg=13202.64, stdev=5392.41 00:17:10.440 lat (usec): min=1321, max=36743, avg=13307.64, stdev=5419.21 00:17:10.440 clat percentiles (usec): 00:17:10.440 | 1.00th=[ 3556], 5.00th=[ 6849], 10.00th=[ 7963], 20.00th=[ 9372], 00:17:10.440 | 30.00th=[10159], 40.00th=[11207], 50.00th=[12387], 60.00th=[13173], 00:17:10.440 | 70.00th=[14353], 80.00th=[16712], 90.00th=[19268], 95.00th=[21627], 00:17:10.440 | 99.00th=[36439], 99.50th=[36439], 99.90th=[36963], 99.95th=[36963], 00:17:10.440 | 99.99th=[36963] 00:17:10.440 bw ( KiB/s): min=16384, max=20480, per=21.85%, avg=18432.00, stdev=2896.31, samples=2 00:17:10.440 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:17:10.440 lat (msec) : 2=0.14%, 4=0.71%, 10=23.00%, 20=62.03%, 50=13.17% 00:17:10.440 lat (msec) : 100=0.95% 00:17:10.440 cpu : usr=2.49%, sys=3.93%, ctx=472, majf=0, minf=1 00:17:10.440 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:17:10.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.440 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:10.440 issued rwts: total=4265,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:10.440 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:10.440 job3: (groupid=0, jobs=1): err= 0: pid=2755740: Mon Jul 15 22:14:35 2024 00:17:10.440 read: IOPS=4713, BW=18.4MiB/s (19.3MB/s)(18.5MiB/1005msec) 00:17:10.440 slat (nsec): min=883, max=19301k, avg=108922.37, stdev=744193.10 00:17:10.440 clat (usec): min=1255, max=37110, 
avg=14126.53, stdev=5284.11 00:17:10.440 lat (usec): min=4341, max=37115, avg=14235.45, stdev=5318.58 00:17:10.440 clat percentiles (usec): 00:17:10.440 | 1.00th=[ 4752], 5.00th=[ 7242], 10.00th=[ 8455], 20.00th=[ 9765], 00:17:10.440 | 30.00th=[11338], 40.00th=[12649], 50.00th=[13566], 60.00th=[14353], 00:17:10.440 | 70.00th=[15533], 80.00th=[17433], 90.00th=[21103], 95.00th=[22938], 00:17:10.440 | 99.00th=[34866], 99.50th=[35914], 99.90th=[36963], 99.95th=[36963], 00:17:10.440 | 99.99th=[36963] 00:17:10.440 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:17:10.440 slat (nsec): min=1525, max=6635.9k, avg=84616.09, stdev=432784.22 00:17:10.440 clat (usec): min=1188, max=41784, avg=11820.68, stdev=5183.53 00:17:10.440 lat (usec): min=1197, max=41794, avg=11905.29, stdev=5197.20 00:17:10.440 clat percentiles (usec): 00:17:10.440 | 1.00th=[ 4817], 5.00th=[ 6128], 10.00th=[ 6849], 20.00th=[ 7963], 00:17:10.440 | 30.00th=[ 8717], 40.00th=[ 9896], 50.00th=[10945], 60.00th=[12125], 00:17:10.440 | 70.00th=[13566], 80.00th=[15139], 90.00th=[16909], 95.00th=[18220], 00:17:10.440 | 99.00th=[35914], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:17:10.440 | 99.99th=[41681] 00:17:10.440 bw ( KiB/s): min=19680, max=21280, per=24.28%, avg=20480.00, stdev=1131.37, samples=2 00:17:10.440 iops : min= 4920, max= 5320, avg=5120.00, stdev=282.84, samples=2 00:17:10.440 lat (msec) : 2=0.03%, 4=0.15%, 10=31.80%, 20=60.64%, 50=7.38% 00:17:10.440 cpu : usr=3.78%, sys=3.69%, ctx=620, majf=0, minf=1 00:17:10.440 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:17:10.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.440 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:10.440 issued rwts: total=4737,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:10.440 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:10.440 00:17:10.440 Run status group 0 (all jobs): 00:17:10.440 READ: bw=76.8MiB/s (80.5MB/s), 16.0MiB/s-22.9MiB/s (16.7MB/s-24.0MB/s), io=80.2MiB (84.0MB), run=1004-1044msec 00:17:10.440 WRITE: bw=82.4MiB/s (86.4MB/s), 17.2MiB/s-23.9MiB/s (18.1MB/s-25.1MB/s), io=86.0MiB (90.2MB), run=1004-1044msec 00:17:10.440 00:17:10.440 Disk stats (read/write): 00:17:10.440 nvme0n1: ios=5165/5235, merge=0/0, ticks=29616/27378, in_queue=56994, util=97.70% 00:17:10.440 nvme0n2: ios=4633/4953, merge=0/0, ticks=23412/21168, in_queue=44580, util=96.94% 00:17:10.440 nvme0n3: ios=3620/3890, merge=0/0, ticks=18290/19341, in_queue=37631, util=96.31% 00:17:10.440 nvme0n4: ios=4046/4096, merge=0/0, ticks=28499/26213, in_queue=54712, util=87.85% 00:17:10.440 22:14:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:17:10.440 [global] 00:17:10.440 thread=1 00:17:10.440 invalidate=1 00:17:10.440 rw=randwrite 00:17:10.440 time_based=1 00:17:10.440 runtime=1 00:17:10.440 ioengine=libaio 00:17:10.440 direct=1 00:17:10.440 bs=4096 00:17:10.440 iodepth=128 00:17:10.440 norandommap=0 00:17:10.440 numjobs=1 00:17:10.440 00:17:10.440 verify_dump=1 00:17:10.440 verify_backlog=512 00:17:10.440 verify_state_save=0 00:17:10.440 do_verify=1 00:17:10.440 verify=crc32c-intel 00:17:10.440 [job0] 00:17:10.440 filename=/dev/nvme0n1 00:17:10.440 [job1] 00:17:10.440 filename=/dev/nvme0n2 00:17:10.440 [job2] 00:17:10.440 filename=/dev/nvme0n3 00:17:10.440 [job3] 00:17:10.440 filename=/dev/nvme0n4 00:17:10.440 
Could not set queue depth (nvme0n1) 00:17:10.440 Could not set queue depth (nvme0n2) 00:17:10.440 Could not set queue depth (nvme0n3) 00:17:10.440 Could not set queue depth (nvme0n4) 00:17:10.708 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:10.708 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:10.708 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:10.708 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:10.708 fio-3.35 00:17:10.708 Starting 4 threads 00:17:12.123 00:17:12.123 job0: (groupid=0, jobs=1): err= 0: pid=2756261: Mon Jul 15 22:14:37 2024 00:17:12.123 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:17:12.123 slat (nsec): min=854, max=11615k, avg=90695.16, stdev=568026.56 00:17:12.123 clat (usec): min=1558, max=39470, avg=12375.93, stdev=6873.70 00:17:12.123 lat (usec): min=1564, max=39475, avg=12466.63, stdev=6903.95 00:17:12.123 clat percentiles (usec): 00:17:12.123 | 1.00th=[ 3785], 5.00th=[ 6128], 10.00th=[ 7242], 20.00th=[ 8291], 00:17:12.123 | 30.00th=[ 8979], 40.00th=[ 9372], 50.00th=[ 9765], 60.00th=[10290], 00:17:12.123 | 70.00th=[11076], 80.00th=[16909], 90.00th=[23200], 95.00th=[28443], 00:17:12.123 | 99.00th=[34866], 99.50th=[36963], 99.90th=[39584], 99.95th=[39584], 00:17:12.123 | 99.99th=[39584] 00:17:12.123 write: IOPS=5684, BW=22.2MiB/s (23.3MB/s)(22.3MiB/1003msec); 0 zone resets 00:17:12.123 slat (nsec): min=1443, max=8241.3k, avg=79698.96, stdev=462888.93 00:17:12.123 clat (usec): min=792, max=26880, avg=10041.06, stdev=4420.42 00:17:12.123 lat (usec): min=798, max=26888, avg=10120.76, stdev=4438.02 00:17:12.123 clat percentiles (usec): 00:17:12.123 | 1.00th=[ 3064], 5.00th=[ 4490], 10.00th=[ 5538], 20.00th=[ 7046], 00:17:12.123 | 30.00th=[ 7898], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[ 9372], 00:17:12.123 | 70.00th=[10421], 80.00th=[12649], 90.00th=[17433], 95.00th=[19530], 00:17:12.123 | 99.00th=[25560], 99.50th=[26608], 99.90th=[26870], 99.95th=[26870], 00:17:12.123 | 99.99th=[26870] 00:17:12.123 bw ( KiB/s): min=17560, max=27496, per=24.82%, avg=22528.00, stdev=7025.81, samples=2 00:17:12.123 iops : min= 4390, max= 6874, avg=5632.00, stdev=1756.45, samples=2 00:17:12.123 lat (usec) : 1000=0.04% 00:17:12.123 lat (msec) : 2=0.21%, 4=2.44%, 10=58.55%, 20=29.81%, 50=8.94% 00:17:12.123 cpu : usr=2.99%, sys=4.39%, ctx=616, majf=0, minf=1 00:17:12.123 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:17:12.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:12.123 issued rwts: total=5632,5702,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:12.123 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:12.123 job1: (groupid=0, jobs=1): err= 0: pid=2756262: Mon Jul 15 22:14:37 2024 00:17:12.123 read: IOPS=5325, BW=20.8MiB/s (21.8MB/s)(20.9MiB/1003msec) 00:17:12.123 slat (nsec): min=901, max=7901.0k, avg=91126.74, stdev=506332.29 00:17:12.123 clat (usec): min=2002, max=32053, avg=11753.79, stdev=4359.04 00:17:12.123 lat (usec): min=2291, max=32077, avg=11844.92, stdev=4397.60 00:17:12.123 clat percentiles (usec): 00:17:12.123 | 1.00th=[ 6194], 5.00th=[ 7963], 10.00th=[ 8717], 20.00th=[ 9110], 00:17:12.123 | 30.00th=[ 9634], 40.00th=[10028], 
50.00th=[10290], 60.00th=[10683], 00:17:12.123 | 70.00th=[11207], 80.00th=[13566], 90.00th=[18482], 95.00th=[20841], 00:17:12.123 | 99.00th=[28705], 99.50th=[29754], 99.90th=[29754], 99.95th=[29754], 00:17:12.123 | 99.99th=[32113] 00:17:12.123 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:17:12.123 slat (nsec): min=1519, max=8154.6k, avg=87388.34, stdev=492777.84 00:17:12.123 clat (usec): min=5435, max=30586, avg=11345.90, stdev=3800.73 00:17:12.123 lat (usec): min=5442, max=30619, avg=11433.29, stdev=3845.15 00:17:12.123 clat percentiles (usec): 00:17:12.123 | 1.00th=[ 6390], 5.00th=[ 7242], 10.00th=[ 7832], 20.00th=[ 8356], 00:17:12.123 | 30.00th=[ 8979], 40.00th=[ 9503], 50.00th=[10159], 60.00th=[11076], 00:17:12.123 | 70.00th=[11863], 80.00th=[13698], 90.00th=[16909], 95.00th=[19792], 00:17:12.123 | 99.00th=[23725], 99.50th=[23987], 99.90th=[24773], 99.95th=[27132], 00:17:12.123 | 99.99th=[30540] 00:17:12.123 bw ( KiB/s): min=20480, max=24576, per=24.82%, avg=22528.00, stdev=2896.31, samples=2 00:17:12.123 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:17:12.123 lat (msec) : 4=0.10%, 10=41.97%, 20=52.25%, 50=5.69% 00:17:12.123 cpu : usr=2.79%, sys=4.59%, ctx=606, majf=0, minf=1 00:17:12.123 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:17:12.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:12.123 issued rwts: total=5341,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:12.123 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:12.123 job2: (groupid=0, jobs=1): err= 0: pid=2756263: Mon Jul 15 22:14:37 2024 00:17:12.123 read: IOPS=4877, BW=19.1MiB/s (20.0MB/s)(19.1MiB/1003msec) 00:17:12.123 slat (nsec): min=996, max=44621k, avg=102718.86, stdev=1014359.01 00:17:12.123 clat (usec): min=2752, max=62640, avg=13365.99, stdev=9255.11 00:17:12.123 lat (usec): min=2757, max=62666, avg=13468.71, stdev=9309.66 00:17:12.123 clat percentiles (usec): 00:17:12.123 | 1.00th=[ 4817], 5.00th=[ 7504], 10.00th=[ 8586], 20.00th=[ 9110], 00:17:12.123 | 30.00th=[ 9765], 40.00th=[10552], 50.00th=[11338], 60.00th=[11994], 00:17:12.123 | 70.00th=[13042], 80.00th=[14353], 90.00th=[16909], 95.00th=[20317], 00:17:12.123 | 99.00th=[57934], 99.50th=[57934], 99.90th=[61080], 99.95th=[61080], 00:17:12.123 | 99.99th=[62653] 00:17:12.123 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:17:12.123 slat (nsec): min=1594, max=26367k, avg=76300.10, stdev=677259.44 00:17:12.123 clat (usec): min=1195, max=54714, avg=11590.44, stdev=7511.85 00:17:12.123 lat (usec): min=1203, max=54723, avg=11666.74, stdev=7562.93 00:17:12.123 clat percentiles (usec): 00:17:12.123 | 1.00th=[ 2278], 5.00th=[ 3884], 10.00th=[ 5473], 20.00th=[ 7111], 00:17:12.123 | 30.00th=[ 8455], 40.00th=[ 9372], 50.00th=[10159], 60.00th=[10945], 00:17:12.123 | 70.00th=[11994], 80.00th=[13566], 90.00th=[17957], 95.00th=[22676], 00:17:12.123 | 99.00th=[46400], 99.50th=[50594], 99.90th=[54789], 99.95th=[54789], 00:17:12.123 | 99.99th=[54789] 00:17:12.123 bw ( KiB/s): min=16384, max=24576, per=22.56%, avg=20480.00, stdev=5792.62, samples=2 00:17:12.123 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 00:17:12.123 lat (msec) : 2=0.33%, 4=2.65%, 10=37.36%, 20=53.28%, 50=4.03% 00:17:12.123 lat (msec) : 100=2.37% 00:17:12.123 cpu : usr=3.29%, sys=6.39%, ctx=360, majf=0, minf=1 00:17:12.123 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:17:12.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:12.123 issued rwts: total=4892,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:12.123 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:12.123 job3: (groupid=0, jobs=1): err= 0: pid=2756264: Mon Jul 15 22:14:37 2024 00:17:12.123 read: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec) 00:17:12.123 slat (nsec): min=923, max=10157k, avg=83328.37, stdev=518440.38 00:17:12.123 clat (usec): min=5414, max=31924, avg=10797.99, stdev=3853.41 00:17:12.123 lat (usec): min=5415, max=31936, avg=10881.32, stdev=3890.93 00:17:12.123 clat percentiles (usec): 00:17:12.123 | 1.00th=[ 6390], 5.00th=[ 7242], 10.00th=[ 7767], 20.00th=[ 8160], 00:17:12.123 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[10028], 00:17:12.123 | 70.00th=[11076], 80.00th=[13435], 90.00th=[16581], 95.00th=[18744], 00:17:12.123 | 99.00th=[23462], 99.50th=[27395], 99.90th=[27657], 99.95th=[27657], 00:17:12.123 | 99.99th=[31851] 00:17:12.123 write: IOPS=6296, BW=24.6MiB/s (25.8MB/s)(24.6MiB/1002msec); 0 zone resets 00:17:12.123 slat (nsec): min=1533, max=10228k, avg=73850.79, stdev=425733.12 00:17:12.123 clat (usec): min=729, max=22006, avg=9604.21, stdev=3075.40 00:17:12.123 lat (usec): min=3265, max=22030, avg=9678.06, stdev=3099.03 00:17:12.123 clat percentiles (usec): 00:17:12.123 | 1.00th=[ 5080], 5.00th=[ 6194], 10.00th=[ 6718], 20.00th=[ 7111], 00:17:12.123 | 30.00th=[ 7439], 40.00th=[ 7898], 50.00th=[ 8717], 60.00th=[ 9765], 00:17:12.123 | 70.00th=[10814], 80.00th=[11863], 90.00th=[14091], 95.00th=[16188], 00:17:12.123 | 99.00th=[17957], 99.50th=[18744], 99.90th=[20579], 99.95th=[21365], 00:17:12.123 | 99.99th=[21890] 00:17:12.123 bw ( KiB/s): min=24576, max=24880, per=27.24%, avg=24728.00, stdev=214.96, samples=2 00:17:12.123 iops : min= 6144, max= 6220, avg=6182.00, stdev=53.74, samples=2 00:17:12.124 lat (usec) : 750=0.01% 00:17:12.124 lat (msec) : 4=0.49%, 10=60.83%, 20=36.55%, 50=2.12% 00:17:12.124 cpu : usr=3.30%, sys=4.80%, ctx=700, majf=0, minf=1 00:17:12.124 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:17:12.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:12.124 issued rwts: total=6144,6309,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:12.124 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:12.124 00:17:12.124 Run status group 0 (all jobs): 00:17:12.124 READ: bw=85.7MiB/s (89.9MB/s), 19.1MiB/s-24.0MiB/s (20.0MB/s-25.1MB/s), io=86.0MiB (90.1MB), run=1002-1003msec 00:17:12.124 WRITE: bw=88.7MiB/s (93.0MB/s), 19.9MiB/s-24.6MiB/s (20.9MB/s-25.8MB/s), io=88.9MiB (93.2MB), run=1002-1003msec 00:17:12.124 00:17:12.124 Disk stats (read/write): 00:17:12.124 nvme0n1: ios=4914/5120, merge=0/0, ticks=23068/19864, in_queue=42932, util=88.38% 00:17:12.124 nvme0n2: ios=4214/4608, merge=0/0, ticks=17599/17240, in_queue=34839, util=97.76% 00:17:12.124 nvme0n3: ios=3615/4061, merge=0/0, ticks=29092/24057, in_queue=53149, util=99.47% 00:17:12.124 nvme0n4: ios=5192/5632, merge=0/0, ticks=21397/18066, in_queue=39463, util=96.05% 00:17:12.124 22:14:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:17:12.124 22:14:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2756596 00:17:12.124 22:14:37 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:17:12.124 22:14:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:17:12.124 [global] 00:17:12.124 thread=1 00:17:12.124 invalidate=1 00:17:12.124 rw=read 00:17:12.124 time_based=1 00:17:12.124 runtime=10 00:17:12.124 ioengine=libaio 00:17:12.124 direct=1 00:17:12.124 bs=4096 00:17:12.124 iodepth=1 00:17:12.124 norandommap=1 00:17:12.124 numjobs=1 00:17:12.124 00:17:12.124 [job0] 00:17:12.124 filename=/dev/nvme0n1 00:17:12.124 [job1] 00:17:12.124 filename=/dev/nvme0n2 00:17:12.124 [job2] 00:17:12.124 filename=/dev/nvme0n3 00:17:12.124 [job3] 00:17:12.124 filename=/dev/nvme0n4 00:17:12.124 Could not set queue depth (nvme0n1) 00:17:12.124 Could not set queue depth (nvme0n2) 00:17:12.124 Could not set queue depth (nvme0n3) 00:17:12.124 Could not set queue depth (nvme0n4) 00:17:12.382 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:12.382 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:12.382 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:12.382 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:12.382 fio-3.35 00:17:12.382 Starting 4 threads 00:17:14.929 22:14:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:17:15.191 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=11087872, buflen=4096 00:17:15.191 fio: pid=2756788, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:15.191 22:14:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:17:15.191 22:14:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:15.191 22:14:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:17:15.191 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=3268608, buflen=4096 00:17:15.191 fio: pid=2756787, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:15.452 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=1929216, buflen=4096 00:17:15.452 fio: pid=2756785, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:15.452 22:14:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:15.452 22:14:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:17:15.452 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=8613888, buflen=4096 00:17:15.452 fio: pid=2756786, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:15.452 22:14:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:15.452 22:14:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:17:15.713 00:17:15.713 job0: 
(groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2756785: Mon Jul 15 22:14:40 2024 00:17:15.713 read: IOPS=162, BW=648KiB/s (663kB/s)(1884KiB/2909msec) 00:17:15.713 slat (usec): min=6, max=115, avg=25.84, stdev= 5.59 00:17:15.713 clat (usec): min=628, max=42096, avg=6096.75, stdev=13530.18 00:17:15.713 lat (usec): min=654, max=42122, avg=6122.59, stdev=13530.83 00:17:15.713 clat percentiles (usec): 00:17:15.713 | 1.00th=[ 848], 5.00th=[ 922], 10.00th=[ 938], 20.00th=[ 955], 00:17:15.713 | 30.00th=[ 963], 40.00th=[ 971], 50.00th=[ 988], 60.00th=[ 996], 00:17:15.713 | 70.00th=[ 1012], 80.00th=[ 1045], 90.00th=[41681], 95.00th=[42206], 00:17:15.713 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:15.713 | 99.99th=[42206] 00:17:15.713 bw ( KiB/s): min= 96, max= 2696, per=9.29%, avg=736.00, stdev=1117.58, samples=5 00:17:15.713 iops : min= 24, max= 674, avg=184.00, stdev=279.40, samples=5 00:17:15.713 lat (usec) : 750=0.21%, 1000=60.17% 00:17:15.713 lat (msec) : 2=26.91%, 50=12.50% 00:17:15.713 cpu : usr=0.07%, sys=0.58%, ctx=474, majf=0, minf=1 00:17:15.713 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:15.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:15.713 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:15.713 issued rwts: total=472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:15.713 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:15.713 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2756786: Mon Jul 15 22:14:40 2024 00:17:15.713 read: IOPS=685, BW=2742KiB/s (2808kB/s)(8412KiB/3068msec) 00:17:15.713 slat (usec): min=6, max=18911, avg=33.98, stdev=411.79 00:17:15.713 clat (usec): min=588, max=42295, avg=1413.35, stdev=2801.42 00:17:15.713 lat (usec): min=613, max=42319, avg=1447.33, stdev=2831.29 00:17:15.713 clat percentiles (usec): 00:17:15.713 | 1.00th=[ 889], 5.00th=[ 988], 10.00th=[ 1045], 20.00th=[ 1123], 00:17:15.713 | 30.00th=[ 1172], 40.00th=[ 1205], 50.00th=[ 1237], 60.00th=[ 1254], 00:17:15.713 | 70.00th=[ 1287], 80.00th=[ 1319], 90.00th=[ 1369], 95.00th=[ 1401], 00:17:15.713 | 99.00th=[ 1532], 99.50th=[ 3982], 99.90th=[42206], 99.95th=[42206], 00:17:15.713 | 99.99th=[42206] 00:17:15.713 bw ( KiB/s): min= 2960, max= 3224, per=39.47%, avg=3128.00, stdev=103.85, samples=5 00:17:15.713 iops : min= 740, max= 806, avg=782.00, stdev=25.96, samples=5 00:17:15.713 lat (usec) : 750=0.38%, 1000=6.04% 00:17:15.713 lat (msec) : 2=93.01%, 4=0.05%, 50=0.48% 00:17:15.713 cpu : usr=0.68%, sys=2.09%, ctx=2107, majf=0, minf=1 00:17:15.713 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:15.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:15.713 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:15.713 issued rwts: total=2104,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:15.713 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:15.713 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2756787: Mon Jul 15 22:14:40 2024 00:17:15.713 read: IOPS=290, BW=1159KiB/s (1187kB/s)(3192KiB/2753msec) 00:17:15.713 slat (nsec): min=7015, max=58477, avg=24525.85, stdev=3341.15 00:17:15.713 clat (usec): min=409, max=42190, avg=3392.51, stdev=9430.13 00:17:15.713 lat (usec): min=434, max=42214, avg=3417.04, stdev=9430.04 00:17:15.713 
clat percentiles (usec): 00:17:15.714 | 1.00th=[ 594], 5.00th=[ 734], 10.00th=[ 799], 20.00th=[ 922], 00:17:15.714 | 30.00th=[ 1012], 40.00th=[ 1074], 50.00th=[ 1139], 60.00th=[ 1205], 00:17:15.714 | 70.00th=[ 1237], 80.00th=[ 1270], 90.00th=[ 1352], 95.00th=[41681], 00:17:15.714 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:15.714 | 99.99th=[42206] 00:17:15.714 bw ( KiB/s): min= 96, max= 2944, per=13.90%, avg=1102.40, stdev=1397.92, samples=5 00:17:15.714 iops : min= 24, max= 736, avg=275.60, stdev=349.48, samples=5 00:17:15.714 lat (usec) : 500=0.38%, 750=5.63%, 1000=23.03% 00:17:15.714 lat (msec) : 2=65.21%, 50=5.63% 00:17:15.714 cpu : usr=0.44%, sys=0.73%, ctx=799, majf=0, minf=1 00:17:15.714 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:15.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:15.714 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:15.714 issued rwts: total=799,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:15.714 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:15.714 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2756788: Mon Jul 15 22:14:40 2024 00:17:15.714 read: IOPS=1047, BW=4187KiB/s (4288kB/s)(10.6MiB/2586msec) 00:17:15.714 slat (nsec): min=6520, max=61264, avg=23835.30, stdev=4189.23 00:17:15.714 clat (usec): min=429, max=41978, avg=917.13, stdev=1400.74 00:17:15.714 lat (usec): min=453, max=42004, avg=940.97, stdev=1400.63 00:17:15.714 clat percentiles (usec): 00:17:15.714 | 1.00th=[ 570], 5.00th=[ 685], 10.00th=[ 717], 20.00th=[ 775], 00:17:15.714 | 30.00th=[ 816], 40.00th=[ 857], 50.00th=[ 889], 60.00th=[ 914], 00:17:15.714 | 70.00th=[ 938], 80.00th=[ 955], 90.00th=[ 979], 95.00th=[ 1004], 00:17:15.714 | 99.00th=[ 1057], 99.50th=[ 1090], 99.90th=[41681], 99.95th=[42206], 00:17:15.714 | 99.99th=[42206] 00:17:15.714 bw ( KiB/s): min= 3560, max= 4536, per=53.24%, avg=4220.80, stdev=402.30, samples=5 00:17:15.714 iops : min= 890, max= 1134, avg=1055.20, stdev=100.57, samples=5 00:17:15.714 lat (usec) : 500=0.37%, 750=15.84%, 1000=78.06% 00:17:15.714 lat (msec) : 2=5.54%, 20=0.04%, 50=0.11% 00:17:15.714 cpu : usr=1.04%, sys=3.02%, ctx=2709, majf=0, minf=2 00:17:15.714 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:15.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:15.714 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:15.714 issued rwts: total=2708,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:15.714 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:15.714 00:17:15.714 Run status group 0 (all jobs): 00:17:15.714 READ: bw=7926KiB/s (8116kB/s), 648KiB/s-4187KiB/s (663kB/s-4288kB/s), io=23.7MiB (24.9MB), run=2586-3068msec 00:17:15.714 00:17:15.714 Disk stats (read/write): 00:17:15.714 nvme0n1: ios=507/0, merge=0/0, ticks=3871/0, in_queue=3871, util=99.63% 00:17:15.714 nvme0n2: ios=1970/0, merge=0/0, ticks=2684/0, in_queue=2684, util=95.33% 00:17:15.714 nvme0n3: ios=775/0, merge=0/0, ticks=2548/0, in_queue=2548, util=96.03% 00:17:15.714 nvme0n4: ios=2454/0, merge=0/0, ticks=2175/0, in_queue=2175, util=96.02% 00:17:15.714 22:14:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:15.714 22:14:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:17:15.975 22:14:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:15.975 22:14:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:17:15.975 22:14:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:15.975 22:14:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:17:16.236 22:14:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:16.236 22:14:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:17:16.497 22:14:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:17:16.497 22:14:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 2756596 00:17:16.498 22:14:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:17:16.498 22:14:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:16.498 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:16.498 22:14:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:16.498 22:14:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:17:16.498 22:14:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:16.498 22:14:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:16.498 22:14:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:16.498 22:14:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:16.498 22:14:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:17:16.498 22:14:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:17:16.498 22:14:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:17:16.498 nvmf hotplug test: fio failed as expected 00:17:16.498 22:14:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:16.759 22:14:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:17:16.759 22:14:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:17:16.759 22:14:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:17:16.759 22:14:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:17:16.759 22:14:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:17:16.759 22:14:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:16.759 22:14:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:17:16.759 22:14:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:16.759 22:14:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:17:16.759 22:14:41 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:16.759 22:14:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:16.759 rmmod nvme_tcp 00:17:16.759 rmmod nvme_fabrics 00:17:16.759 rmmod nvme_keyring 00:17:16.759 22:14:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:16.759 22:14:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:17:16.759 22:14:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:17:16.759 22:14:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2753038 ']' 00:17:16.759 22:14:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2753038 00:17:16.759 22:14:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 2753038 ']' 00:17:16.759 22:14:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 2753038 00:17:16.759 22:14:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:17:16.759 22:14:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:16.759 22:14:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2753038 00:17:16.759 22:14:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:16.759 22:14:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:16.759 22:14:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2753038' 00:17:16.759 killing process with pid 2753038 00:17:16.759 22:14:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 2753038 00:17:16.759 22:14:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 2753038 00:17:17.020 22:14:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:17.020 22:14:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:17.020 22:14:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:17.020 22:14:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:17.020 22:14:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:17.020 22:14:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.020 22:14:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:17.020 22:14:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:18.973 22:14:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:18.973 00:17:18.973 real 0m27.875s 00:17:18.973 user 2m32.251s 00:17:18.973 sys 0m8.895s 00:17:18.973 22:14:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:18.973 22:14:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.973 ************************************ 00:17:18.973 END TEST nvmf_fio_target 00:17:18.973 ************************************ 00:17:18.973 22:14:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:18.973 22:14:44 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:18.973 22:14:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:18.973 22:14:44 nvmf_tcp -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:17:18.973 22:14:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:19.235 ************************************ 00:17:19.235 START TEST nvmf_bdevio 00:17:19.235 ************************************ 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:19.235 * Looking for test storage... 00:17:19.235 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:17:19.235 22:14:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:27.374 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:27.374 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:17:27.374 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:27.374 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:27.374 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:27.374 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:27.374 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:27.374 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:17:27.374 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:27.374 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:17:27.374 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:17:27.374 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:17:27.374 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:17:27.374 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:17:27.374 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:17:27.374 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:27.374 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:27.374 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:27.374 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:27.374 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:27.374 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:27.374 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:27.374 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:27.374 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:27.374 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:27.374 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:27.374 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:27.374 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:27.374 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:27.374 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:27.374 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:27.374 22:14:51 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:27.374 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:27.374 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:27.374 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:27.374 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:27.374 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:27.374 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:27.374 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:27.374 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:27.375 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:27.375 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:27.375 
Found net devices under 0000:4b:00.1: cvl_0_1 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:27.375 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:27.375 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.486 ms 00:17:27.375 00:17:27.375 --- 10.0.0.2 ping statistics --- 00:17:27.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.375 rtt min/avg/max/mdev = 0.486/0.486/0.486/0.000 ms 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:27.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:27.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.359 ms 00:17:27.375 00:17:27.375 --- 10.0.0.1 ping statistics --- 00:17:27.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.375 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2761801 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2761801 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 2761801 ']' 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:27.375 22:14:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:27.375 [2024-07-15 22:14:51.779909] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:17:27.375 [2024-07-15 22:14:51.779960] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:27.375 EAL: No free 2048 kB hugepages reported on node 1 00:17:27.375 [2024-07-15 22:14:51.864745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:27.375 [2024-07-15 22:14:51.940313] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:27.375 [2024-07-15 22:14:51.940364] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:27.375 [2024-07-15 22:14:51.940372] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:27.375 [2024-07-15 22:14:51.940378] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:27.375 [2024-07-15 22:14:51.940385] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:27.375 [2024-07-15 22:14:51.940540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:27.375 [2024-07-15 22:14:51.940687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:27.375 [2024-07-15 22:14:51.940843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:27.375 [2024-07-15 22:14:51.940844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:27.375 22:14:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:27.375 22:14:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:17:27.375 22:14:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:27.375 22:14:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:27.375 22:14:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:27.375 22:14:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:27.375 22:14:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:27.375 22:14:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.375 22:14:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:27.375 [2024-07-15 22:14:52.615268] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:27.375 22:14:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.375 22:14:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:27.375 22:14:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.375 22:14:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:27.375 Malloc0 00:17:27.375 22:14:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.375 22:14:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:27.375 22:14:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.375 22:14:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:27.375 22:14:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.375 22:14:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:27.375 22:14:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.375 22:14:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:27.375 22:14:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.375 22:14:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:27.376 22:14:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.376 22:14:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
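The xtrace above assembles the bdevio target over the TCP transport: it creates the transport, backs it with a 64 MiB malloc bdev using 512-byte blocks, and exposes that bdev as a namespace of nqn.2016-06.io.spdk:cnode1 behind a listener on 10.0.0.2:4420. A minimal standalone sketch of the same RPC sequence follows; it only mirrors the rpc_cmd calls shown above, and it assumes an nvmf_tgt that is already running and answering on the default /var/tmp/spdk.sock RPC socket (in this run the target was started inside the cvl_0_0_ns_spdk namespace earlier in the log).

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                                    # TCP transport with the same options used above
$rpc bdev_malloc_create 64 512 -b Malloc0                                       # 64 MiB malloc bdev, 512-byte block size
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # allow any host, set the serial number
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                   # expose Malloc0 as a namespace
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420  # listen on 10.0.0.2:4420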
00:17:27.376 [2024-07-15 22:14:52.674555] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:27.376 22:14:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.376 22:14:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:27.376 22:14:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:27.376 22:14:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:17:27.376 22:14:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:17:27.376 22:14:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:27.376 22:14:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:27.376 { 00:17:27.376 "params": { 00:17:27.376 "name": "Nvme$subsystem", 00:17:27.376 "trtype": "$TEST_TRANSPORT", 00:17:27.376 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:27.376 "adrfam": "ipv4", 00:17:27.376 "trsvcid": "$NVMF_PORT", 00:17:27.376 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:27.376 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:27.376 "hdgst": ${hdgst:-false}, 00:17:27.376 "ddgst": ${ddgst:-false} 00:17:27.376 }, 00:17:27.376 "method": "bdev_nvme_attach_controller" 00:17:27.376 } 00:17:27.376 EOF 00:17:27.376 )") 00:17:27.376 22:14:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:17:27.376 22:14:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:17:27.376 22:14:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:17:27.376 22:14:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:27.376 "params": { 00:17:27.376 "name": "Nvme1", 00:17:27.376 "trtype": "tcp", 00:17:27.376 "traddr": "10.0.0.2", 00:17:27.376 "adrfam": "ipv4", 00:17:27.376 "trsvcid": "4420", 00:17:27.376 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:27.376 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:27.376 "hdgst": false, 00:17:27.376 "ddgst": false 00:17:27.376 }, 00:17:27.376 "method": "bdev_nvme_attach_controller" 00:17:27.376 }' 00:17:27.635 [2024-07-15 22:14:52.729172] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
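The JSON object printed just above is what gen_nvmf_target_json emits for subsystem 1; bdevio receives it on an anonymous pipe, which is why the invocation reads --json /dev/fd/62. A hand-driven sketch of the same invocation is below. The outer "subsystems"/"bdev"/"config" wrapper is an assumption here (it follows the standard SPDK JSON config layout, but the xtrace only shows the inner bdev_nvme_attach_controller entry), and /tmp/bdevio_nvme.json is just a hypothetical scratch file standing in for the process substitution used by the test.

bdevio=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio
# Write a config that attaches the NVMe/TCP namespace exported above as bdev Nvme1n1.
cat > /tmp/bdevio_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
$bdevio --json /tmp/bdevio_nvme.json   # runs the same blockdev test suite against Nvme1n1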
00:17:27.635 [2024-07-15 22:14:52.729224] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2762092 ] 00:17:27.635 EAL: No free 2048 kB hugepages reported on node 1 00:17:27.635 [2024-07-15 22:14:52.791641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:27.635 [2024-07-15 22:14:52.861140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:27.635 [2024-07-15 22:14:52.861203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:27.635 [2024-07-15 22:14:52.861424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.895 I/O targets: 00:17:27.895 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:27.895 00:17:27.895 00:17:27.895 CUnit - A unit testing framework for C - Version 2.1-3 00:17:27.895 http://cunit.sourceforge.net/ 00:17:27.895 00:17:27.895 00:17:27.895 Suite: bdevio tests on: Nvme1n1 00:17:27.895 Test: blockdev write read block ...passed 00:17:27.895 Test: blockdev write zeroes read block ...passed 00:17:27.895 Test: blockdev write zeroes read no split ...passed 00:17:27.895 Test: blockdev write zeroes read split ...passed 00:17:27.895 Test: blockdev write zeroes read split partial ...passed 00:17:27.895 Test: blockdev reset ...[2024-07-15 22:14:53.188370] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:27.895 [2024-07-15 22:14:53.188438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x216ace0 (9): Bad file descriptor 00:17:28.154 [2024-07-15 22:14:53.324959] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:28.154 passed 00:17:28.154 Test: blockdev write read 8 blocks ...passed 00:17:28.154 Test: blockdev write read size > 128k ...passed 00:17:28.154 Test: blockdev write read invalid size ...passed 00:17:28.154 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:28.154 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:28.154 Test: blockdev write read max offset ...passed 00:17:28.154 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:28.154 Test: blockdev writev readv 8 blocks ...passed 00:17:28.154 Test: blockdev writev readv 30 x 1block ...passed 00:17:28.414 Test: blockdev writev readv block ...passed 00:17:28.414 Test: blockdev writev readv size > 128k ...passed 00:17:28.414 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:28.414 Test: blockdev comparev and writev ...[2024-07-15 22:14:53.513483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:28.414 [2024-07-15 22:14:53.513509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.414 [2024-07-15 22:14:53.513521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:28.414 [2024-07-15 22:14:53.513527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:28.414 [2024-07-15 22:14:53.514115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:28.414 [2024-07-15 22:14:53.514127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:28.414 [2024-07-15 22:14:53.514137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:28.414 [2024-07-15 22:14:53.514143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:28.414 [2024-07-15 22:14:53.514739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:28.414 [2024-07-15 22:14:53.514746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:28.414 [2024-07-15 22:14:53.514755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:28.414 [2024-07-15 22:14:53.514761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:28.414 [2024-07-15 22:14:53.515306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:28.414 [2024-07-15 22:14:53.515314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:28.414 [2024-07-15 22:14:53.515323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:28.414 [2024-07-15 22:14:53.515329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:28.414 passed 00:17:28.414 Test: blockdev nvme passthru rw ...passed 00:17:28.414 Test: blockdev nvme passthru vendor specific ...[2024-07-15 22:14:53.600194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:28.414 [2024-07-15 22:14:53.600206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:28.415 [2024-07-15 22:14:53.600702] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:28.415 [2024-07-15 22:14:53.600708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:28.415 [2024-07-15 22:14:53.601055] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:28.415 [2024-07-15 22:14:53.601061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:28.415 [2024-07-15 22:14:53.601406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:28.415 [2024-07-15 22:14:53.601413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:28.415 passed 00:17:28.415 Test: blockdev nvme admin passthru ...passed 00:17:28.415 Test: blockdev copy ...passed 00:17:28.415 00:17:28.415 Run Summary: Type Total Ran Passed Failed Inactive 00:17:28.415 suites 1 1 n/a 0 0 00:17:28.415 tests 23 23 23 0 0 00:17:28.415 asserts 152 152 152 0 n/a 00:17:28.415 00:17:28.415 Elapsed time = 1.319 seconds 00:17:28.693 22:14:53 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:28.693 22:14:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.693 22:14:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:28.693 22:14:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.693 22:14:53 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:28.693 22:14:53 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:17:28.693 22:14:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:28.693 22:14:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:17:28.693 22:14:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:28.693 22:14:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:17:28.693 22:14:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:28.693 22:14:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:28.693 rmmod nvme_tcp 00:17:28.693 rmmod nvme_fabrics 00:17:28.693 rmmod nvme_keyring 00:17:28.693 22:14:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:28.693 22:14:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:17:28.693 22:14:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:17:28.693 22:14:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2761801 ']' 00:17:28.693 22:14:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2761801 00:17:28.693 22:14:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
2761801 ']' 00:17:28.693 22:14:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 2761801 00:17:28.693 22:14:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:17:28.693 22:14:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:28.693 22:14:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2761801 00:17:28.693 22:14:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:17:28.693 22:14:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:17:28.693 22:14:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2761801' 00:17:28.693 killing process with pid 2761801 00:17:28.693 22:14:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 2761801 00:17:28.693 22:14:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 2761801 00:17:28.955 22:14:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:28.955 22:14:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:28.955 22:14:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:28.955 22:14:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:28.955 22:14:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:28.955 22:14:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.955 22:14:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:28.955 22:14:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.870 22:14:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:30.870 00:17:30.870 real 0m11.875s 00:17:30.870 user 0m12.736s 00:17:30.870 sys 0m5.945s 00:17:30.870 22:14:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:30.870 22:14:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:30.870 ************************************ 00:17:30.870 END TEST nvmf_bdevio 00:17:30.870 ************************************ 00:17:31.132 22:14:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:31.132 22:14:56 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:31.132 22:14:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:31.132 22:14:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:31.132 22:14:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:31.132 ************************************ 00:17:31.132 START TEST nvmf_auth_target 00:17:31.132 ************************************ 00:17:31.132 22:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:31.132 * Looking for test storage... 
00:17:31.132 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:31.132 22:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:31.132 22:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:31.132 22:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:31.132 22:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:31.132 22:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:31.132 22:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:31.132 22:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:31.132 22:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:31.132 22:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:31.133 22:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:39.281 22:15:03 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:39.281 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:39.281 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: 
cvl_0_0' 00:17:39.281 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:39.281 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:39.281 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:39.281 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.733 ms 00:17:39.281 00:17:39.281 --- 10.0.0.2 ping statistics --- 00:17:39.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.281 rtt min/avg/max/mdev = 0.733/0.733/0.733/0.000 ms 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:39.281 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:39.281 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms 00:17:39.281 00:17:39.281 --- 10.0.0.1 ping statistics --- 00:17:39.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.281 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:17:39.281 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:39.282 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:39.282 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:39.282 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:39.282 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:39.282 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:39.282 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:39.282 22:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:17:39.282 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:39.282 22:15:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:39.282 22:15:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.282 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2766588 00:17:39.282 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2766588 00:17:39.282 22:15:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:39.282 22:15:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2766588 ']' 00:17:39.282 22:15:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.282 22:15:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:39.282 22:15:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
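(For anyone replaying this bring-up by hand outside the autotest harness, the namespace plumbing traced above reduces to the short shell sequence below. It is only a condensed restatement of commands already visible in this log, run as root; the interface names cvl_0_0/cvl_0_1 — the two ports of the E810 NIC on this machine — and the 10.0.0.x addresses are specific to this run. One port is moved into a private network namespace to play the target, the other stays in the root namespace as the initiator, so a single host pushes real NVMe/TCP traffic between its own ports.)

ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                              # namespace that will host nvmf_tgt
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
ping -c 1 10.0.0.2                                        # root ns -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target namespace -> root ns
modprobe nvme-tcp                                         # kernel initiator, used later by nvme connect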
00:17:39.282 22:15:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:39.282 22:15:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=2766639 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0a042ba49ba9fae97beda81ca5e0264740c73a13638d7918 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.UVQ 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0a042ba49ba9fae97beda81ca5e0264740c73a13638d7918 0 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0a042ba49ba9fae97beda81ca5e0264740c73a13638d7918 0 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0a042ba49ba9fae97beda81ca5e0264740c73a13638d7918 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.UVQ 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.UVQ 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.UVQ 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=cb2afe4450920980f3f30ff863ad5cd7f4ffd8ed12f53ed3b77bf73fa85df412 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.EhW 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key cb2afe4450920980f3f30ff863ad5cd7f4ffd8ed12f53ed3b77bf73fa85df412 3 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 cb2afe4450920980f3f30ff863ad5cd7f4ffd8ed12f53ed3b77bf73fa85df412 3 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=cb2afe4450920980f3f30ff863ad5cd7f4ffd8ed12f53ed3b77bf73fa85df412 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.EhW 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.EhW 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.EhW 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f528c3ef25a71faa7e2189a718ba4762 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.pxd 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f528c3ef25a71faa7e2189a718ba4762 1 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f528c3ef25a71faa7e2189a718ba4762 1 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=f528c3ef25a71faa7e2189a718ba4762 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.pxd 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.pxd 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.pxd 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8276db4d979eeb0d431c24adb6fcc778ed853264668b6ea1 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.R6C 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8276db4d979eeb0d431c24adb6fcc778ed853264668b6ea1 2 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8276db4d979eeb0d431c24adb6fcc778ed853264668b6ea1 2 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=8276db4d979eeb0d431c24adb6fcc778ed853264668b6ea1 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.R6C 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.R6C 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.R6C 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:39.282 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:39.283 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:39.283 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:39.283 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:39.283 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:39.283 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d04878cc5b5cc54740807a8606ca5583383a38bb71cd94e9 00:17:39.283 
22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:39.283 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.f1Z 00:17:39.283 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d04878cc5b5cc54740807a8606ca5583383a38bb71cd94e9 2 00:17:39.283 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d04878cc5b5cc54740807a8606ca5583383a38bb71cd94e9 2 00:17:39.283 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:39.283 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:39.283 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d04878cc5b5cc54740807a8606ca5583383a38bb71cd94e9 00:17:39.283 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:39.283 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.f1Z 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.f1Z 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.f1Z 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=48264c78acf254278888f0c243c6808d 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.eSx 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 48264c78acf254278888f0c243c6808d 1 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 48264c78acf254278888f0c243c6808d 1 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=48264c78acf254278888f0c243c6808d 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.eSx 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.eSx 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.eSx 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=982c8b47459bff2e77ae7df0724d2097e1b594bf74b9cf424317bcd60ff44e8a 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.0tk 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 982c8b47459bff2e77ae7df0724d2097e1b594bf74b9cf424317bcd60ff44e8a 3 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 982c8b47459bff2e77ae7df0724d2097e1b594bf74b9cf424317bcd60ff44e8a 3 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=982c8b47459bff2e77ae7df0724d2097e1b594bf74b9cf424317bcd60ff44e8a 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.0tk 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.0tk 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.0tk 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 2766588 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2766588 ']' 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
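(Each gen_dhchap_key call above boils down to: read N random bytes as a hex string via xxd, wrap that string in the NVMe DH-HMAC-CHAP secret representation, and stash it mode 0600 under /tmp. The sketch below is an approximation of what the xxd/python pair is doing, not the script itself. It assumes, as the DHHC-1:00:MGEwNDJi... secrets printed later in this log suggest, that the hex string itself is the key material, that the one-digit suffix 0/1/2/3 for null/sha256/sha384/sha512 becomes the two-digit hash identifier, and that a little-endian CRC32 of the key is appended before base64-encoding.)

len=48                                                  # 48 hex chars -> 24 random bytes
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)          # e.g. 0a042ba4...8d7918
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" <<'PY' > "$file"
import base64, binascii, sys
key = sys.argv[1].encode()                        # hex string used verbatim as the passphrase
crc = binascii.crc32(key).to_bytes(4, "little")   # assumption: little-endian CRC32 trailer
print("DHHC-1:00:%s:" % base64.b64encode(key + crc).decode())   # 00 = null (no hash)
PY
chmod 0600 "$file"

(The resulting file is what keyring_file_add_key loads on both sides below, and the same DHHC-1:... string is what the kernel-initiator nvme connect later passes via --dhchap-secret / --dhchap-ctrl-secret.)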
00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:39.544 22:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.806 22:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:39.806 22:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:39.806 22:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 2766639 /var/tmp/host.sock 00:17:39.806 22:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2766639 ']' 00:17:39.806 22:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:17:39.806 22:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:39.806 22:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:39.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:39.806 22:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:39.806 22:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.806 22:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:39.806 22:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:39.806 22:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:17:39.806 22:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.806 22:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.806 22:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.806 22:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:39.806 22:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.UVQ 00:17:39.806 22:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.806 22:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.806 22:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.806 22:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.UVQ 00:17:39.806 22:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.UVQ 00:17:40.067 22:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.EhW ]] 00:17:40.067 22:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.EhW 00:17:40.067 22:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.067 22:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.067 22:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.067 22:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.EhW 00:17:40.067 22:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.EhW 00:17:40.328 22:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:40.328 22:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.pxd 00:17:40.328 22:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.328 22:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.328 22:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.328 22:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.pxd 00:17:40.328 22:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.pxd 00:17:40.328 22:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.R6C ]] 00:17:40.590 22:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.R6C 00:17:40.590 22:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.590 22:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.590 22:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.590 22:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.R6C 00:17:40.590 22:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.R6C 00:17:40.590 22:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:40.590 22:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.f1Z 00:17:40.590 22:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.590 22:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.590 22:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.590 22:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.f1Z 00:17:40.590 22:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.f1Z 00:17:40.851 22:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.eSx ]] 00:17:40.851 22:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.eSx 00:17:40.851 22:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.851 22:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.851 22:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.851 22:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.eSx 00:17:40.851 22:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.eSx 00:17:41.112 22:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:41.112 22:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.0tk 00:17:41.112 22:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.112 22:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.112 22:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.112 22:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.0tk 00:17:41.112 22:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.0tk 00:17:41.112 22:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:17:41.112 22:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:41.112 22:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:41.113 22:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:41.113 22:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:41.113 22:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:41.373 22:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:17:41.374 22:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:41.374 22:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:41.374 22:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:41.374 22:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:41.374 22:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.374 22:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.374 22:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.374 22:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.374 22:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.374 22:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.374 22:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.634 00:17:41.634 22:15:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:41.634 22:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:41.634 22:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.634 22:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.634 22:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.634 22:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.634 22:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.634 22:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.634 22:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:41.634 { 00:17:41.634 "cntlid": 1, 00:17:41.634 "qid": 0, 00:17:41.634 "state": "enabled", 00:17:41.634 "thread": "nvmf_tgt_poll_group_000", 00:17:41.634 "listen_address": { 00:17:41.634 "trtype": "TCP", 00:17:41.634 "adrfam": "IPv4", 00:17:41.634 "traddr": "10.0.0.2", 00:17:41.634 "trsvcid": "4420" 00:17:41.634 }, 00:17:41.634 "peer_address": { 00:17:41.634 "trtype": "TCP", 00:17:41.634 "adrfam": "IPv4", 00:17:41.634 "traddr": "10.0.0.1", 00:17:41.634 "trsvcid": "35538" 00:17:41.634 }, 00:17:41.634 "auth": { 00:17:41.634 "state": "completed", 00:17:41.634 "digest": "sha256", 00:17:41.634 "dhgroup": "null" 00:17:41.634 } 00:17:41.634 } 00:17:41.634 ]' 00:17:41.634 22:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:41.895 22:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:41.895 22:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:41.895 22:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:41.895 22:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:41.895 22:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.895 22:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.895 22:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.156 22:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MGEwNDJiYTQ5YmE5ZmFlOTdiZWRhODFjYTVlMDI2NDc0MGM3M2ExMzYzOGQ3OTE4BxKSWw==: --dhchap-ctrl-secret DHHC-1:03:Y2IyYWZlNDQ1MDkyMDk4MGYzZjMwZmY4NjNhZDVjZDdmNGZmZDhlZDEyZjUzZWQzYjc3YmY3M2ZhODVkZjQxMrAnVt8=: 00:17:42.727 22:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.727 22:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:42.727 22:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.727 22:15:07 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.727 22:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.727 22:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:42.727 22:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:42.727 22:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:42.988 22:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:17:42.988 22:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:42.988 22:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:42.988 22:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:42.988 22:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:42.988 22:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.988 22:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.988 22:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.988 22:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.988 22:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.988 22:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.988 22:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.249 00:17:43.249 22:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:43.249 22:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:43.249 22:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.509 22:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.509 22:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.509 22:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.509 22:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.509 22:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.509 22:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:43.509 { 00:17:43.509 "cntlid": 3, 00:17:43.509 "qid": 0, 00:17:43.509 
"state": "enabled", 00:17:43.509 "thread": "nvmf_tgt_poll_group_000", 00:17:43.509 "listen_address": { 00:17:43.509 "trtype": "TCP", 00:17:43.509 "adrfam": "IPv4", 00:17:43.509 "traddr": "10.0.0.2", 00:17:43.509 "trsvcid": "4420" 00:17:43.509 }, 00:17:43.509 "peer_address": { 00:17:43.509 "trtype": "TCP", 00:17:43.509 "adrfam": "IPv4", 00:17:43.509 "traddr": "10.0.0.1", 00:17:43.509 "trsvcid": "35784" 00:17:43.509 }, 00:17:43.509 "auth": { 00:17:43.509 "state": "completed", 00:17:43.509 "digest": "sha256", 00:17:43.509 "dhgroup": "null" 00:17:43.509 } 00:17:43.509 } 00:17:43.509 ]' 00:17:43.509 22:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:43.509 22:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:43.509 22:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:43.509 22:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:43.509 22:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:43.509 22:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.509 22:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.509 22:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.769 22:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZjUyOGMzZWYyNWE3MWZhYTdlMjE4OWE3MThiYTQ3NjKGD9B7: --dhchap-ctrl-secret DHHC-1:02:ODI3NmRiNGQ5NzllZWIwZDQzMWMyNGFkYjZmY2M3NzhlZDg1MzI2NDY2OGI2ZWExEZsUaA==: 00:17:44.376 22:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.376 22:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:44.376 22:15:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.376 22:15:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.376 22:15:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.376 22:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:44.376 22:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:44.376 22:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:44.637 22:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:17:44.637 22:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:44.637 22:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:44.637 22:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:44.637 22:15:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:44.637 22:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.637 22:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.637 22:15:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.637 22:15:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.637 22:15:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.637 22:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.637 22:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.898 00:17:44.898 22:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:44.898 22:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:44.898 22:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.898 22:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.898 22:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.898 22:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.898 22:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.898 22:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.898 22:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:44.898 { 00:17:44.898 "cntlid": 5, 00:17:44.898 "qid": 0, 00:17:44.898 "state": "enabled", 00:17:44.898 "thread": "nvmf_tgt_poll_group_000", 00:17:44.898 "listen_address": { 00:17:44.898 "trtype": "TCP", 00:17:44.898 "adrfam": "IPv4", 00:17:44.898 "traddr": "10.0.0.2", 00:17:44.898 "trsvcid": "4420" 00:17:44.898 }, 00:17:44.898 "peer_address": { 00:17:44.898 "trtype": "TCP", 00:17:44.898 "adrfam": "IPv4", 00:17:44.898 "traddr": "10.0.0.1", 00:17:44.898 "trsvcid": "35816" 00:17:44.898 }, 00:17:44.898 "auth": { 00:17:44.898 "state": "completed", 00:17:44.898 "digest": "sha256", 00:17:44.898 "dhgroup": "null" 00:17:44.898 } 00:17:44.898 } 00:17:44.898 ]' 00:17:44.898 22:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:45.160 22:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:45.160 22:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:45.160 22:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:45.160 22:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:17:45.160 22:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.160 22:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.160 22:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.420 22:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZDA0ODc4Y2M1YjVjYzU0NzQwODA3YTg2MDZjYTU1ODMzODNhMzhiYjcxY2Q5NGU52o8hBg==: --dhchap-ctrl-secret DHHC-1:01:NDgyNjRjNzhhY2YyNTQyNzg4ODhmMGMyNDNjNjgwOGQbj5zh: 00:17:45.992 22:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.992 22:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:45.992 22:15:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.992 22:15:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.992 22:15:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.992 22:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:45.992 22:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:45.992 22:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:46.253 22:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:17:46.253 22:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:46.253 22:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:46.253 22:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:46.253 22:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:46.253 22:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.253 22:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:46.253 22:15:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.253 22:15:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.253 22:15:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.253 22:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:46.253 22:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:46.514 00:17:46.514 22:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:46.514 22:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:46.514 22:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.514 22:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.514 22:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.514 22:15:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.514 22:15:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.774 22:15:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.774 22:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:46.774 { 00:17:46.774 "cntlid": 7, 00:17:46.774 "qid": 0, 00:17:46.774 "state": "enabled", 00:17:46.774 "thread": "nvmf_tgt_poll_group_000", 00:17:46.774 "listen_address": { 00:17:46.774 "trtype": "TCP", 00:17:46.774 "adrfam": "IPv4", 00:17:46.774 "traddr": "10.0.0.2", 00:17:46.774 "trsvcid": "4420" 00:17:46.774 }, 00:17:46.774 "peer_address": { 00:17:46.775 "trtype": "TCP", 00:17:46.775 "adrfam": "IPv4", 00:17:46.775 "traddr": "10.0.0.1", 00:17:46.775 "trsvcid": "35846" 00:17:46.775 }, 00:17:46.775 "auth": { 00:17:46.775 "state": "completed", 00:17:46.775 "digest": "sha256", 00:17:46.775 "dhgroup": "null" 00:17:46.775 } 00:17:46.775 } 00:17:46.775 ]' 00:17:46.775 22:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:46.775 22:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:46.775 22:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:46.775 22:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:46.775 22:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:46.775 22:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.775 22:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.775 22:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.036 22:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:OTgyYzhiNDc0NTliZmYyZTc3YWU3ZGYwNzI0ZDIwOTdlMWI1OTRiZjc0YjljZjQyNDMxN2JjZDYwZmY0NGU4YcvVo98=: 00:17:47.607 22:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.607 22:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:47.607 22:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.607 22:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.607 22:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.607 22:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:47.607 22:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:47.607 22:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:47.607 22:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:47.867 22:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:17:47.867 22:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:47.867 22:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:47.867 22:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:47.867 22:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:47.867 22:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.867 22:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.867 22:15:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.867 22:15:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.867 22:15:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.867 22:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.867 22:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.128 00:17:48.128 22:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:48.128 22:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:48.128 22:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.388 22:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.388 22:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.388 22:15:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:17:48.388 22:15:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.388 22:15:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.388 22:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:48.388 { 00:17:48.388 "cntlid": 9, 00:17:48.388 "qid": 0, 00:17:48.388 "state": "enabled", 00:17:48.388 "thread": "nvmf_tgt_poll_group_000", 00:17:48.388 "listen_address": { 00:17:48.388 "trtype": "TCP", 00:17:48.388 "adrfam": "IPv4", 00:17:48.388 "traddr": "10.0.0.2", 00:17:48.388 "trsvcid": "4420" 00:17:48.388 }, 00:17:48.388 "peer_address": { 00:17:48.388 "trtype": "TCP", 00:17:48.388 "adrfam": "IPv4", 00:17:48.388 "traddr": "10.0.0.1", 00:17:48.388 "trsvcid": "35882" 00:17:48.388 }, 00:17:48.388 "auth": { 00:17:48.388 "state": "completed", 00:17:48.388 "digest": "sha256", 00:17:48.388 "dhgroup": "ffdhe2048" 00:17:48.388 } 00:17:48.388 } 00:17:48.388 ]' 00:17:48.388 22:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:48.388 22:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:48.388 22:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:48.388 22:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:48.388 22:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:48.388 22:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.388 22:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.388 22:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.648 22:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MGEwNDJiYTQ5YmE5ZmFlOTdiZWRhODFjYTVlMDI2NDc0MGM3M2ExMzYzOGQ3OTE4BxKSWw==: --dhchap-ctrl-secret DHHC-1:03:Y2IyYWZlNDQ1MDkyMDk4MGYzZjMwZmY4NjNhZDVjZDdmNGZmZDhlZDEyZjUzZWQzYjc3YmY3M2ZhODVkZjQxMrAnVt8=: 00:17:49.589 22:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.589 22:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:49.589 22:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.589 22:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.589 22:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.589 22:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:49.589 22:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:49.590 22:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:17:49.590 22:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:17:49.590 22:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:49.590 22:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:49.590 22:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:49.590 22:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:49.590 22:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.590 22:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.590 22:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.590 22:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.590 22:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.590 22:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.590 22:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.851 00:17:49.851 22:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:49.851 22:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:49.851 22:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.851 22:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.851 22:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.851 22:15:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.851 22:15:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.851 22:15:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.851 22:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:49.851 { 00:17:49.851 "cntlid": 11, 00:17:49.851 "qid": 0, 00:17:49.851 "state": "enabled", 00:17:49.851 "thread": "nvmf_tgt_poll_group_000", 00:17:49.851 "listen_address": { 00:17:49.851 "trtype": "TCP", 00:17:49.851 "adrfam": "IPv4", 00:17:49.851 "traddr": "10.0.0.2", 00:17:49.851 "trsvcid": "4420" 00:17:49.851 }, 00:17:49.851 "peer_address": { 00:17:49.851 "trtype": "TCP", 00:17:49.851 "adrfam": "IPv4", 00:17:49.851 "traddr": "10.0.0.1", 00:17:49.851 "trsvcid": "35900" 00:17:49.851 }, 00:17:49.851 "auth": { 00:17:49.851 "state": "completed", 00:17:49.851 "digest": "sha256", 00:17:49.851 "dhgroup": "ffdhe2048" 00:17:49.851 } 00:17:49.851 } 00:17:49.851 ]' 00:17:49.851 
22:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:50.111 22:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:50.111 22:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:50.111 22:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:50.111 22:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:50.111 22:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.111 22:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.111 22:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.372 22:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZjUyOGMzZWYyNWE3MWZhYTdlMjE4OWE3MThiYTQ3NjKGD9B7: --dhchap-ctrl-secret DHHC-1:02:ODI3NmRiNGQ5NzllZWIwZDQzMWMyNGFkYjZmY2M3NzhlZDg1MzI2NDY2OGI2ZWExEZsUaA==: 00:17:50.942 22:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.943 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.943 22:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:50.943 22:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.943 22:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.943 22:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.943 22:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:50.943 22:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:50.943 22:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:51.202 22:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:17:51.202 22:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:51.202 22:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:51.202 22:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:51.202 22:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:51.202 22:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.202 22:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.202 22:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.202 22:15:16 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:51.202 22:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.202 22:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.202 22:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.461 00:17:51.461 22:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:51.461 22:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:51.461 22:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.461 22:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.731 22:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.731 22:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.731 22:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.731 22:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.731 22:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:51.731 { 00:17:51.731 "cntlid": 13, 00:17:51.731 "qid": 0, 00:17:51.731 "state": "enabled", 00:17:51.731 "thread": "nvmf_tgt_poll_group_000", 00:17:51.731 "listen_address": { 00:17:51.731 "trtype": "TCP", 00:17:51.731 "adrfam": "IPv4", 00:17:51.731 "traddr": "10.0.0.2", 00:17:51.731 "trsvcid": "4420" 00:17:51.731 }, 00:17:51.731 "peer_address": { 00:17:51.731 "trtype": "TCP", 00:17:51.731 "adrfam": "IPv4", 00:17:51.731 "traddr": "10.0.0.1", 00:17:51.731 "trsvcid": "35926" 00:17:51.731 }, 00:17:51.731 "auth": { 00:17:51.731 "state": "completed", 00:17:51.731 "digest": "sha256", 00:17:51.731 "dhgroup": "ffdhe2048" 00:17:51.731 } 00:17:51.731 } 00:17:51.731 ]' 00:17:51.731 22:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:51.731 22:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:51.731 22:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:51.731 22:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:51.731 22:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:51.731 22:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.731 22:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.731 22:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.990 22:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZDA0ODc4Y2M1YjVjYzU0NzQwODA3YTg2MDZjYTU1ODMzODNhMzhiYjcxY2Q5NGU52o8hBg==: --dhchap-ctrl-secret DHHC-1:01:NDgyNjRjNzhhY2YyNTQyNzg4ODhmMGMyNDNjNjgwOGQbj5zh: 00:17:52.559 22:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.559 22:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:52.559 22:15:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.559 22:15:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.559 22:15:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.559 22:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:52.559 22:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:52.559 22:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:52.819 22:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:17:52.819 22:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:52.819 22:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:52.819 22:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:52.819 22:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:52.820 22:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.820 22:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:52.820 22:15:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.820 22:15:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.820 22:15:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.820 22:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:52.820 22:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:53.080 00:17:53.080 22:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:53.080 22:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:53.080 22:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.341 22:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.341 22:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.341 22:15:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.341 22:15:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.341 22:15:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.341 22:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:53.341 { 00:17:53.341 "cntlid": 15, 00:17:53.341 "qid": 0, 00:17:53.341 "state": "enabled", 00:17:53.341 "thread": "nvmf_tgt_poll_group_000", 00:17:53.341 "listen_address": { 00:17:53.341 "trtype": "TCP", 00:17:53.341 "adrfam": "IPv4", 00:17:53.341 "traddr": "10.0.0.2", 00:17:53.341 "trsvcid": "4420" 00:17:53.341 }, 00:17:53.341 "peer_address": { 00:17:53.341 "trtype": "TCP", 00:17:53.341 "adrfam": "IPv4", 00:17:53.341 "traddr": "10.0.0.1", 00:17:53.341 "trsvcid": "33742" 00:17:53.341 }, 00:17:53.341 "auth": { 00:17:53.341 "state": "completed", 00:17:53.341 "digest": "sha256", 00:17:53.341 "dhgroup": "ffdhe2048" 00:17:53.341 } 00:17:53.341 } 00:17:53.341 ]' 00:17:53.341 22:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:53.341 22:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:53.341 22:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:53.341 22:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:53.341 22:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:53.341 22:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.341 22:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.341 22:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.602 22:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:OTgyYzhiNDc0NTliZmYyZTc3YWU3ZGYwNzI0ZDIwOTdlMWI1OTRiZjc0YjljZjQyNDMxN2JjZDYwZmY0NGU4YcvVo98=: 00:17:54.174 22:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.174 22:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:54.174 22:15:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.174 22:15:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.174 22:15:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.174 22:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:54.174 22:15:19 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:54.174 22:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:54.174 22:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:54.434 22:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:17:54.434 22:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:54.434 22:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:54.434 22:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:54.434 22:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:54.434 22:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.434 22:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.434 22:15:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.434 22:15:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.434 22:15:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.434 22:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.434 22:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.692 00:17:54.692 22:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:54.692 22:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.692 22:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:54.952 22:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.952 22:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.952 22:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.952 22:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.952 22:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.952 22:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.952 { 00:17:54.952 "cntlid": 17, 00:17:54.952 "qid": 0, 00:17:54.952 "state": "enabled", 00:17:54.952 "thread": "nvmf_tgt_poll_group_000", 00:17:54.952 "listen_address": { 00:17:54.952 "trtype": "TCP", 00:17:54.952 "adrfam": "IPv4", 00:17:54.952 "traddr": 
"10.0.0.2", 00:17:54.952 "trsvcid": "4420" 00:17:54.952 }, 00:17:54.952 "peer_address": { 00:17:54.952 "trtype": "TCP", 00:17:54.952 "adrfam": "IPv4", 00:17:54.952 "traddr": "10.0.0.1", 00:17:54.952 "trsvcid": "33772" 00:17:54.952 }, 00:17:54.952 "auth": { 00:17:54.952 "state": "completed", 00:17:54.952 "digest": "sha256", 00:17:54.952 "dhgroup": "ffdhe3072" 00:17:54.952 } 00:17:54.952 } 00:17:54.952 ]' 00:17:54.952 22:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:54.952 22:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:54.953 22:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:54.953 22:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:54.953 22:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:54.953 22:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.953 22:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.953 22:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.213 22:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MGEwNDJiYTQ5YmE5ZmFlOTdiZWRhODFjYTVlMDI2NDc0MGM3M2ExMzYzOGQ3OTE4BxKSWw==: --dhchap-ctrl-secret DHHC-1:03:Y2IyYWZlNDQ1MDkyMDk4MGYzZjMwZmY4NjNhZDVjZDdmNGZmZDhlZDEyZjUzZWQzYjc3YmY3M2ZhODVkZjQxMrAnVt8=: 00:17:56.154 22:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.154 22:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:56.154 22:15:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.154 22:15:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.154 22:15:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.154 22:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:56.154 22:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:56.154 22:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:56.154 22:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:17:56.154 22:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:56.154 22:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:56.154 22:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:56.154 22:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:56.154 22:15:21 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.154 22:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.155 22:15:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.155 22:15:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.155 22:15:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.155 22:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.155 22:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.415 00:17:56.415 22:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:56.415 22:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:56.415 22:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.415 22:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.676 22:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.676 22:15:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.676 22:15:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.676 22:15:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.676 22:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:56.676 { 00:17:56.676 "cntlid": 19, 00:17:56.676 "qid": 0, 00:17:56.676 "state": "enabled", 00:17:56.676 "thread": "nvmf_tgt_poll_group_000", 00:17:56.676 "listen_address": { 00:17:56.676 "trtype": "TCP", 00:17:56.676 "adrfam": "IPv4", 00:17:56.676 "traddr": "10.0.0.2", 00:17:56.676 "trsvcid": "4420" 00:17:56.676 }, 00:17:56.676 "peer_address": { 00:17:56.676 "trtype": "TCP", 00:17:56.676 "adrfam": "IPv4", 00:17:56.676 "traddr": "10.0.0.1", 00:17:56.676 "trsvcid": "33804" 00:17:56.676 }, 00:17:56.676 "auth": { 00:17:56.676 "state": "completed", 00:17:56.676 "digest": "sha256", 00:17:56.676 "dhgroup": "ffdhe3072" 00:17:56.676 } 00:17:56.676 } 00:17:56.676 ]' 00:17:56.676 22:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.676 22:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:56.676 22:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.676 22:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:56.676 22:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:56.676 22:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.676 22:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.676 22:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.937 22:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZjUyOGMzZWYyNWE3MWZhYTdlMjE4OWE3MThiYTQ3NjKGD9B7: --dhchap-ctrl-secret DHHC-1:02:ODI3NmRiNGQ5NzllZWIwZDQzMWMyNGFkYjZmY2M3NzhlZDg1MzI2NDY2OGI2ZWExEZsUaA==: 00:17:57.507 22:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.507 22:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:57.507 22:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.507 22:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.507 22:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.507 22:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:57.507 22:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:57.507 22:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:57.768 22:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:17:57.768 22:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:57.768 22:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:57.768 22:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:57.768 22:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:57.768 22:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.768 22:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.768 22:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.768 22:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.768 22:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.768 22:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.768 22:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.029 00:17:58.029 22:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:58.029 22:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:58.029 22:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.299 22:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.299 22:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.299 22:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.299 22:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.299 22:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.299 22:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:58.299 { 00:17:58.299 "cntlid": 21, 00:17:58.299 "qid": 0, 00:17:58.299 "state": "enabled", 00:17:58.299 "thread": "nvmf_tgt_poll_group_000", 00:17:58.299 "listen_address": { 00:17:58.299 "trtype": "TCP", 00:17:58.299 "adrfam": "IPv4", 00:17:58.299 "traddr": "10.0.0.2", 00:17:58.299 "trsvcid": "4420" 00:17:58.299 }, 00:17:58.299 "peer_address": { 00:17:58.299 "trtype": "TCP", 00:17:58.299 "adrfam": "IPv4", 00:17:58.299 "traddr": "10.0.0.1", 00:17:58.299 "trsvcid": "33836" 00:17:58.299 }, 00:17:58.299 "auth": { 00:17:58.299 "state": "completed", 00:17:58.299 "digest": "sha256", 00:17:58.299 "dhgroup": "ffdhe3072" 00:17:58.299 } 00:17:58.299 } 00:17:58.299 ]' 00:17:58.299 22:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:58.299 22:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:58.299 22:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:58.299 22:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:58.299 22:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:58.299 22:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.299 22:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.299 22:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.593 22:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZDA0ODc4Y2M1YjVjYzU0NzQwODA3YTg2MDZjYTU1ODMzODNhMzhiYjcxY2Q5NGU52o8hBg==: --dhchap-ctrl-secret DHHC-1:01:NDgyNjRjNzhhY2YyNTQyNzg4ODhmMGMyNDNjNjgwOGQbj5zh: 00:17:59.167 22:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.167 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:17:59.167 22:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:59.167 22:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.167 22:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.167 22:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.167 22:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:59.167 22:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:59.167 22:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:59.427 22:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:17:59.427 22:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.427 22:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:59.427 22:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:59.427 22:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:59.427 22:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.427 22:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:59.427 22:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.427 22:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.427 22:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.427 22:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:59.427 22:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:59.687 00:17:59.687 22:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:59.688 22:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.688 22:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:59.947 22:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.948 22:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.948 22:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.948 22:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:17:59.948 22:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.948 22:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:59.948 { 00:17:59.948 "cntlid": 23, 00:17:59.948 "qid": 0, 00:17:59.948 "state": "enabled", 00:17:59.948 "thread": "nvmf_tgt_poll_group_000", 00:17:59.948 "listen_address": { 00:17:59.948 "trtype": "TCP", 00:17:59.948 "adrfam": "IPv4", 00:17:59.948 "traddr": "10.0.0.2", 00:17:59.948 "trsvcid": "4420" 00:17:59.948 }, 00:17:59.948 "peer_address": { 00:17:59.948 "trtype": "TCP", 00:17:59.948 "adrfam": "IPv4", 00:17:59.948 "traddr": "10.0.0.1", 00:17:59.948 "trsvcid": "33848" 00:17:59.948 }, 00:17:59.948 "auth": { 00:17:59.948 "state": "completed", 00:17:59.948 "digest": "sha256", 00:17:59.948 "dhgroup": "ffdhe3072" 00:17:59.948 } 00:17:59.948 } 00:17:59.948 ]' 00:17:59.948 22:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:59.948 22:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:59.948 22:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:59.948 22:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:59.948 22:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:59.948 22:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.948 22:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.948 22:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.209 22:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:OTgyYzhiNDc0NTliZmYyZTc3YWU3ZGYwNzI0ZDIwOTdlMWI1OTRiZjc0YjljZjQyNDMxN2JjZDYwZmY0NGU4YcvVo98=: 00:18:01.151 22:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.151 22:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:01.151 22:15:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.151 22:15:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.151 22:15:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.151 22:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:01.151 22:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:01.151 22:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:01.151 22:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:01.151 22:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:18:01.151 22:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:01.151 22:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:01.151 22:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:01.151 22:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:01.151 22:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.151 22:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.151 22:15:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.151 22:15:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.151 22:15:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.151 22:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.151 22:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.412 00:18:01.412 22:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:01.412 22:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:01.412 22:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.673 22:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.673 22:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.673 22:15:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.673 22:15:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.673 22:15:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.673 22:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:01.673 { 00:18:01.673 "cntlid": 25, 00:18:01.673 "qid": 0, 00:18:01.673 "state": "enabled", 00:18:01.673 "thread": "nvmf_tgt_poll_group_000", 00:18:01.673 "listen_address": { 00:18:01.673 "trtype": "TCP", 00:18:01.673 "adrfam": "IPv4", 00:18:01.673 "traddr": "10.0.0.2", 00:18:01.673 "trsvcid": "4420" 00:18:01.673 }, 00:18:01.673 "peer_address": { 00:18:01.673 "trtype": "TCP", 00:18:01.673 "adrfam": "IPv4", 00:18:01.673 "traddr": "10.0.0.1", 00:18:01.673 "trsvcid": "33886" 00:18:01.673 }, 00:18:01.673 "auth": { 00:18:01.673 "state": "completed", 00:18:01.673 "digest": "sha256", 00:18:01.673 "dhgroup": "ffdhe4096" 00:18:01.673 } 00:18:01.673 } 00:18:01.673 ]' 00:18:01.673 22:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:01.673 22:15:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:01.673 22:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:01.673 22:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:01.673 22:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:01.673 22:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.673 22:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.673 22:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.934 22:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MGEwNDJiYTQ5YmE5ZmFlOTdiZWRhODFjYTVlMDI2NDc0MGM3M2ExMzYzOGQ3OTE4BxKSWw==: --dhchap-ctrl-secret DHHC-1:03:Y2IyYWZlNDQ1MDkyMDk4MGYzZjMwZmY4NjNhZDVjZDdmNGZmZDhlZDEyZjUzZWQzYjc3YmY3M2ZhODVkZjQxMrAnVt8=: 00:18:02.506 22:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.767 22:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:02.767 22:15:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.767 22:15:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.767 22:15:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.767 22:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:02.767 22:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:02.767 22:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:02.767 22:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:18:02.767 22:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:02.767 22:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:02.767 22:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:02.767 22:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:02.767 22:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.767 22:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.767 22:15:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.767 22:15:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.767 22:15:28 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.768 22:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.768 22:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.028 00:18:03.028 22:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:03.028 22:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:03.028 22:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.288 22:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.288 22:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.288 22:15:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.288 22:15:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.288 22:15:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.288 22:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:03.289 { 00:18:03.289 "cntlid": 27, 00:18:03.289 "qid": 0, 00:18:03.289 "state": "enabled", 00:18:03.289 "thread": "nvmf_tgt_poll_group_000", 00:18:03.289 "listen_address": { 00:18:03.289 "trtype": "TCP", 00:18:03.289 "adrfam": "IPv4", 00:18:03.289 "traddr": "10.0.0.2", 00:18:03.289 "trsvcid": "4420" 00:18:03.289 }, 00:18:03.289 "peer_address": { 00:18:03.289 "trtype": "TCP", 00:18:03.289 "adrfam": "IPv4", 00:18:03.289 "traddr": "10.0.0.1", 00:18:03.289 "trsvcid": "44488" 00:18:03.289 }, 00:18:03.289 "auth": { 00:18:03.289 "state": "completed", 00:18:03.289 "digest": "sha256", 00:18:03.289 "dhgroup": "ffdhe4096" 00:18:03.289 } 00:18:03.289 } 00:18:03.289 ]' 00:18:03.289 22:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:03.289 22:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:03.289 22:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:03.289 22:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:03.289 22:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:03.549 22:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.549 22:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.549 22:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.549 22:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZjUyOGMzZWYyNWE3MWZhYTdlMjE4OWE3MThiYTQ3NjKGD9B7: --dhchap-ctrl-secret DHHC-1:02:ODI3NmRiNGQ5NzllZWIwZDQzMWMyNGFkYjZmY2M3NzhlZDg1MzI2NDY2OGI2ZWExEZsUaA==: 00:18:04.492 22:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.492 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.492 22:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:04.492 22:15:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.492 22:15:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.492 22:15:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.492 22:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:04.492 22:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:04.492 22:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:04.492 22:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:18:04.492 22:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:04.492 22:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:04.492 22:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:04.492 22:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:04.492 22:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.492 22:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.492 22:15:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.492 22:15:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.492 22:15:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.492 22:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.492 22:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.753 00:18:04.753 22:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:04.753 22:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:04.753 22:15:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.015 22:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.015 22:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.015 22:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.015 22:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.015 22:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.015 22:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:05.015 { 00:18:05.015 "cntlid": 29, 00:18:05.015 "qid": 0, 00:18:05.015 "state": "enabled", 00:18:05.015 "thread": "nvmf_tgt_poll_group_000", 00:18:05.015 "listen_address": { 00:18:05.015 "trtype": "TCP", 00:18:05.015 "adrfam": "IPv4", 00:18:05.015 "traddr": "10.0.0.2", 00:18:05.015 "trsvcid": "4420" 00:18:05.015 }, 00:18:05.015 "peer_address": { 00:18:05.015 "trtype": "TCP", 00:18:05.015 "adrfam": "IPv4", 00:18:05.015 "traddr": "10.0.0.1", 00:18:05.015 "trsvcid": "44512" 00:18:05.015 }, 00:18:05.015 "auth": { 00:18:05.015 "state": "completed", 00:18:05.015 "digest": "sha256", 00:18:05.015 "dhgroup": "ffdhe4096" 00:18:05.015 } 00:18:05.015 } 00:18:05.015 ]' 00:18:05.015 22:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:05.015 22:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:05.015 22:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:05.015 22:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:05.015 22:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:05.015 22:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.015 22:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.015 22:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.277 22:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZDA0ODc4Y2M1YjVjYzU0NzQwODA3YTg2MDZjYTU1ODMzODNhMzhiYjcxY2Q5NGU52o8hBg==: --dhchap-ctrl-secret DHHC-1:01:NDgyNjRjNzhhY2YyNTQyNzg4ODhmMGMyNDNjNjgwOGQbj5zh: 00:18:06.220 22:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.220 22:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:06.220 22:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.220 22:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.220 22:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.220 22:15:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:06.220 22:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:06.220 22:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:06.220 22:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:18:06.220 22:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:06.220 22:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:06.220 22:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:06.220 22:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:06.220 22:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.220 22:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:06.220 22:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.220 22:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.220 22:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.220 22:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:06.220 22:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:06.481 00:18:06.481 22:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:06.481 22:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:06.481 22:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.481 22:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.742 22:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.742 22:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.742 22:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.742 22:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.742 22:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:06.742 { 00:18:06.742 "cntlid": 31, 00:18:06.742 "qid": 0, 00:18:06.742 "state": "enabled", 00:18:06.742 "thread": "nvmf_tgt_poll_group_000", 00:18:06.742 "listen_address": { 00:18:06.742 "trtype": "TCP", 00:18:06.742 "adrfam": "IPv4", 00:18:06.742 "traddr": "10.0.0.2", 00:18:06.742 "trsvcid": "4420" 00:18:06.742 }, 
00:18:06.742 "peer_address": { 00:18:06.742 "trtype": "TCP", 00:18:06.742 "adrfam": "IPv4", 00:18:06.742 "traddr": "10.0.0.1", 00:18:06.742 "trsvcid": "44556" 00:18:06.742 }, 00:18:06.742 "auth": { 00:18:06.742 "state": "completed", 00:18:06.742 "digest": "sha256", 00:18:06.742 "dhgroup": "ffdhe4096" 00:18:06.742 } 00:18:06.742 } 00:18:06.742 ]' 00:18:06.742 22:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:06.742 22:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:06.742 22:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:06.742 22:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:06.742 22:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:06.742 22:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.742 22:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.742 22:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.003 22:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:OTgyYzhiNDc0NTliZmYyZTc3YWU3ZGYwNzI0ZDIwOTdlMWI1OTRiZjc0YjljZjQyNDMxN2JjZDYwZmY0NGU4YcvVo98=: 00:18:07.575 22:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.575 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.575 22:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:07.575 22:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.575 22:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.575 22:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.575 22:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:07.575 22:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:07.575 22:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:07.575 22:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:07.837 22:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:18:07.837 22:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:07.837 22:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:07.837 22:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:07.837 22:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:07.837 22:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:18:07.837 22:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.837 22:15:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.837 22:15:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.837 22:15:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.837 22:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.837 22:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.099 00:18:08.099 22:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:08.100 22:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.100 22:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:08.361 22:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.361 22:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.361 22:15:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.361 22:15:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.361 22:15:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.361 22:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:08.361 { 00:18:08.361 "cntlid": 33, 00:18:08.361 "qid": 0, 00:18:08.361 "state": "enabled", 00:18:08.361 "thread": "nvmf_tgt_poll_group_000", 00:18:08.361 "listen_address": { 00:18:08.361 "trtype": "TCP", 00:18:08.361 "adrfam": "IPv4", 00:18:08.361 "traddr": "10.0.0.2", 00:18:08.361 "trsvcid": "4420" 00:18:08.361 }, 00:18:08.361 "peer_address": { 00:18:08.361 "trtype": "TCP", 00:18:08.361 "adrfam": "IPv4", 00:18:08.361 "traddr": "10.0.0.1", 00:18:08.361 "trsvcid": "44590" 00:18:08.361 }, 00:18:08.361 "auth": { 00:18:08.361 "state": "completed", 00:18:08.361 "digest": "sha256", 00:18:08.361 "dhgroup": "ffdhe6144" 00:18:08.361 } 00:18:08.361 } 00:18:08.361 ]' 00:18:08.361 22:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:08.361 22:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:08.361 22:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:08.361 22:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:08.361 22:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:08.622 22:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.622 22:15:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.622 22:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.622 22:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MGEwNDJiYTQ5YmE5ZmFlOTdiZWRhODFjYTVlMDI2NDc0MGM3M2ExMzYzOGQ3OTE4BxKSWw==: --dhchap-ctrl-secret DHHC-1:03:Y2IyYWZlNDQ1MDkyMDk4MGYzZjMwZmY4NjNhZDVjZDdmNGZmZDhlZDEyZjUzZWQzYjc3YmY3M2ZhODVkZjQxMrAnVt8=: 00:18:09.564 22:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.564 22:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:09.564 22:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.564 22:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.564 22:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.564 22:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:09.564 22:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:09.564 22:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:09.564 22:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:18:09.564 22:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:09.564 22:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:09.564 22:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:09.564 22:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:09.564 22:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.564 22:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.565 22:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.565 22:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.565 22:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.565 22:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.565 22:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.136 00:18:10.136 22:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:10.136 22:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.136 22:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:10.136 22:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.136 22:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.136 22:15:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.136 22:15:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.136 22:15:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.136 22:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:10.136 { 00:18:10.136 "cntlid": 35, 00:18:10.136 "qid": 0, 00:18:10.136 "state": "enabled", 00:18:10.136 "thread": "nvmf_tgt_poll_group_000", 00:18:10.136 "listen_address": { 00:18:10.136 "trtype": "TCP", 00:18:10.136 "adrfam": "IPv4", 00:18:10.136 "traddr": "10.0.0.2", 00:18:10.136 "trsvcid": "4420" 00:18:10.136 }, 00:18:10.136 "peer_address": { 00:18:10.136 "trtype": "TCP", 00:18:10.136 "adrfam": "IPv4", 00:18:10.136 "traddr": "10.0.0.1", 00:18:10.136 "trsvcid": "44614" 00:18:10.136 }, 00:18:10.136 "auth": { 00:18:10.136 "state": "completed", 00:18:10.136 "digest": "sha256", 00:18:10.136 "dhgroup": "ffdhe6144" 00:18:10.136 } 00:18:10.136 } 00:18:10.136 ]' 00:18:10.136 22:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:10.136 22:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:10.136 22:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:10.136 22:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:10.136 22:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:10.397 22:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.397 22:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.397 22:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.397 22:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZjUyOGMzZWYyNWE3MWZhYTdlMjE4OWE3MThiYTQ3NjKGD9B7: --dhchap-ctrl-secret DHHC-1:02:ODI3NmRiNGQ5NzllZWIwZDQzMWMyNGFkYjZmY2M3NzhlZDg1MzI2NDY2OGI2ZWExEZsUaA==: 00:18:11.339 22:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.339 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.339 22:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:11.339 22:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.339 22:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.339 22:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.339 22:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:11.339 22:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:11.339 22:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:11.339 22:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:18:11.339 22:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:11.339 22:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:11.339 22:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:11.339 22:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:11.340 22:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.340 22:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.340 22:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.340 22:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.340 22:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.340 22:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.340 22:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.910 00:18:11.910 22:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:11.910 22:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:11.910 22:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.910 22:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.910 22:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.910 22:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.910 22:15:37 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:11.910 22:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.910 22:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:11.910 { 00:18:11.910 "cntlid": 37, 00:18:11.910 "qid": 0, 00:18:11.910 "state": "enabled", 00:18:11.910 "thread": "nvmf_tgt_poll_group_000", 00:18:11.910 "listen_address": { 00:18:11.910 "trtype": "TCP", 00:18:11.910 "adrfam": "IPv4", 00:18:11.910 "traddr": "10.0.0.2", 00:18:11.910 "trsvcid": "4420" 00:18:11.910 }, 00:18:11.910 "peer_address": { 00:18:11.910 "trtype": "TCP", 00:18:11.910 "adrfam": "IPv4", 00:18:11.910 "traddr": "10.0.0.1", 00:18:11.910 "trsvcid": "44640" 00:18:11.910 }, 00:18:11.910 "auth": { 00:18:11.910 "state": "completed", 00:18:11.910 "digest": "sha256", 00:18:11.910 "dhgroup": "ffdhe6144" 00:18:11.910 } 00:18:11.910 } 00:18:11.910 ]' 00:18:11.910 22:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:11.910 22:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:11.910 22:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:11.910 22:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:11.910 22:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:12.170 22:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.170 22:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.170 22:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.170 22:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZDA0ODc4Y2M1YjVjYzU0NzQwODA3YTg2MDZjYTU1ODMzODNhMzhiYjcxY2Q5NGU52o8hBg==: --dhchap-ctrl-secret DHHC-1:01:NDgyNjRjNzhhY2YyNTQyNzg4ODhmMGMyNDNjNjgwOGQbj5zh: 00:18:13.146 22:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.146 22:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:13.146 22:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.146 22:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.146 22:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.146 22:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:13.146 22:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:13.146 22:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:13.146 22:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:18:13.146 22:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.146 22:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:13.146 22:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:13.146 22:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:13.146 22:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.146 22:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:13.146 22:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.146 22:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.146 22:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.146 22:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:13.146 22:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:13.406 00:18:13.686 22:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:13.686 22:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:13.686 22:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.686 22:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.686 22:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.686 22:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.686 22:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.686 22:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.686 22:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:13.686 { 00:18:13.686 "cntlid": 39, 00:18:13.686 "qid": 0, 00:18:13.686 "state": "enabled", 00:18:13.686 "thread": "nvmf_tgt_poll_group_000", 00:18:13.686 "listen_address": { 00:18:13.686 "trtype": "TCP", 00:18:13.686 "adrfam": "IPv4", 00:18:13.686 "traddr": "10.0.0.2", 00:18:13.686 "trsvcid": "4420" 00:18:13.686 }, 00:18:13.686 "peer_address": { 00:18:13.686 "trtype": "TCP", 00:18:13.686 "adrfam": "IPv4", 00:18:13.686 "traddr": "10.0.0.1", 00:18:13.686 "trsvcid": "46974" 00:18:13.686 }, 00:18:13.686 "auth": { 00:18:13.686 "state": "completed", 00:18:13.686 "digest": "sha256", 00:18:13.686 "dhgroup": "ffdhe6144" 00:18:13.686 } 00:18:13.686 } 00:18:13.686 ]' 00:18:13.686 22:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:13.686 22:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:13.686 22:15:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:13.954 22:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:13.954 22:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:13.954 22:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.954 22:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.954 22:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.954 22:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:OTgyYzhiNDc0NTliZmYyZTc3YWU3ZGYwNzI0ZDIwOTdlMWI1OTRiZjc0YjljZjQyNDMxN2JjZDYwZmY0NGU4YcvVo98=: 00:18:14.894 22:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.894 22:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:14.894 22:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.894 22:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.894 22:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.894 22:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:14.894 22:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:14.894 22:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:14.894 22:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:14.894 22:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:18:14.894 22:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:14.894 22:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:14.894 22:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:14.894 22:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:14.894 22:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.894 22:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.894 22:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.894 22:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.894 22:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.894 22:15:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.894 22:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.465 00:18:15.465 22:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:15.465 22:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:15.465 22:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.726 22:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.726 22:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.726 22:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.726 22:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.726 22:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.726 22:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:15.726 { 00:18:15.726 "cntlid": 41, 00:18:15.726 "qid": 0, 00:18:15.726 "state": "enabled", 00:18:15.726 "thread": "nvmf_tgt_poll_group_000", 00:18:15.726 "listen_address": { 00:18:15.726 "trtype": "TCP", 00:18:15.726 "adrfam": "IPv4", 00:18:15.726 "traddr": "10.0.0.2", 00:18:15.726 "trsvcid": "4420" 00:18:15.726 }, 00:18:15.726 "peer_address": { 00:18:15.726 "trtype": "TCP", 00:18:15.726 "adrfam": "IPv4", 00:18:15.726 "traddr": "10.0.0.1", 00:18:15.726 "trsvcid": "47010" 00:18:15.726 }, 00:18:15.726 "auth": { 00:18:15.726 "state": "completed", 00:18:15.726 "digest": "sha256", 00:18:15.726 "dhgroup": "ffdhe8192" 00:18:15.726 } 00:18:15.726 } 00:18:15.726 ]' 00:18:15.726 22:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:15.726 22:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:15.726 22:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:15.726 22:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:15.726 22:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:15.726 22:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.726 22:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.726 22:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.986 22:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret 
DHHC-1:00:MGEwNDJiYTQ5YmE5ZmFlOTdiZWRhODFjYTVlMDI2NDc0MGM3M2ExMzYzOGQ3OTE4BxKSWw==: --dhchap-ctrl-secret DHHC-1:03:Y2IyYWZlNDQ1MDkyMDk4MGYzZjMwZmY4NjNhZDVjZDdmNGZmZDhlZDEyZjUzZWQzYjc3YmY3M2ZhODVkZjQxMrAnVt8=: 00:18:16.555 22:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.555 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.555 22:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:16.555 22:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.555 22:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.815 22:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.815 22:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:16.815 22:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:16.815 22:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:16.815 22:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:18:16.815 22:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:16.815 22:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:16.815 22:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:16.815 22:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:16.815 22:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.815 22:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.815 22:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.815 22:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.815 22:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.815 22:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.815 22:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.386 00:18:17.386 22:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:17.386 22:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:17.386 22:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.647 22:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.647 22:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.647 22:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.647 22:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.647 22:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.647 22:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:17.647 { 00:18:17.647 "cntlid": 43, 00:18:17.647 "qid": 0, 00:18:17.647 "state": "enabled", 00:18:17.647 "thread": "nvmf_tgt_poll_group_000", 00:18:17.647 "listen_address": { 00:18:17.647 "trtype": "TCP", 00:18:17.647 "adrfam": "IPv4", 00:18:17.647 "traddr": "10.0.0.2", 00:18:17.647 "trsvcid": "4420" 00:18:17.647 }, 00:18:17.647 "peer_address": { 00:18:17.647 "trtype": "TCP", 00:18:17.647 "adrfam": "IPv4", 00:18:17.647 "traddr": "10.0.0.1", 00:18:17.647 "trsvcid": "47032" 00:18:17.647 }, 00:18:17.647 "auth": { 00:18:17.647 "state": "completed", 00:18:17.647 "digest": "sha256", 00:18:17.647 "dhgroup": "ffdhe8192" 00:18:17.647 } 00:18:17.647 } 00:18:17.647 ]' 00:18:17.647 22:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:17.647 22:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:17.647 22:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:17.647 22:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:17.647 22:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:17.647 22:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.647 22:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.647 22:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.908 22:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZjUyOGMzZWYyNWE3MWZhYTdlMjE4OWE3MThiYTQ3NjKGD9B7: --dhchap-ctrl-secret DHHC-1:02:ODI3NmRiNGQ5NzllZWIwZDQzMWMyNGFkYjZmY2M3NzhlZDg1MzI2NDY2OGI2ZWExEZsUaA==: 00:18:18.848 22:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.848 22:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:18.848 22:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.848 22:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.848 22:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.848 22:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:18:18.848 22:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:18.848 22:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:18.848 22:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:18:18.848 22:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:18.848 22:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:18.848 22:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:18.848 22:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:18.848 22:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.848 22:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.848 22:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.848 22:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.848 22:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.848 22:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.848 22:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.420 00:18:19.420 22:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.420 22:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:19.420 22:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.420 22:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.420 22:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.420 22:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.420 22:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.420 22:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.681 22:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:19.681 { 00:18:19.681 "cntlid": 45, 00:18:19.681 "qid": 0, 00:18:19.681 "state": "enabled", 00:18:19.681 "thread": "nvmf_tgt_poll_group_000", 00:18:19.681 "listen_address": { 00:18:19.681 "trtype": "TCP", 00:18:19.681 "adrfam": "IPv4", 00:18:19.681 "traddr": "10.0.0.2", 00:18:19.681 "trsvcid": "4420" 
00:18:19.681 }, 00:18:19.681 "peer_address": { 00:18:19.681 "trtype": "TCP", 00:18:19.681 "adrfam": "IPv4", 00:18:19.681 "traddr": "10.0.0.1", 00:18:19.681 "trsvcid": "47070" 00:18:19.681 }, 00:18:19.682 "auth": { 00:18:19.682 "state": "completed", 00:18:19.682 "digest": "sha256", 00:18:19.682 "dhgroup": "ffdhe8192" 00:18:19.682 } 00:18:19.682 } 00:18:19.682 ]' 00:18:19.682 22:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:19.682 22:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:19.682 22:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:19.682 22:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:19.682 22:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:19.682 22:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.682 22:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.682 22:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.941 22:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZDA0ODc4Y2M1YjVjYzU0NzQwODA3YTg2MDZjYTU1ODMzODNhMzhiYjcxY2Q5NGU52o8hBg==: --dhchap-ctrl-secret DHHC-1:01:NDgyNjRjNzhhY2YyNTQyNzg4ODhmMGMyNDNjNjgwOGQbj5zh: 00:18:20.511 22:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.511 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.511 22:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:20.511 22:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.511 22:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.511 22:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.511 22:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:20.511 22:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:20.511 22:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:20.771 22:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:18:20.771 22:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:20.771 22:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:20.771 22:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:20.771 22:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:20.771 22:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.771 22:15:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:20.771 22:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.771 22:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.771 22:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.771 22:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:20.771 22:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:21.341 00:18:21.341 22:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.341 22:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.341 22:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.341 22:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.341 22:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.341 22:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.341 22:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.602 22:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.602 22:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:21.602 { 00:18:21.602 "cntlid": 47, 00:18:21.602 "qid": 0, 00:18:21.602 "state": "enabled", 00:18:21.602 "thread": "nvmf_tgt_poll_group_000", 00:18:21.602 "listen_address": { 00:18:21.602 "trtype": "TCP", 00:18:21.602 "adrfam": "IPv4", 00:18:21.602 "traddr": "10.0.0.2", 00:18:21.602 "trsvcid": "4420" 00:18:21.602 }, 00:18:21.602 "peer_address": { 00:18:21.602 "trtype": "TCP", 00:18:21.602 "adrfam": "IPv4", 00:18:21.602 "traddr": "10.0.0.1", 00:18:21.602 "trsvcid": "47092" 00:18:21.602 }, 00:18:21.602 "auth": { 00:18:21.602 "state": "completed", 00:18:21.602 "digest": "sha256", 00:18:21.602 "dhgroup": "ffdhe8192" 00:18:21.602 } 00:18:21.602 } 00:18:21.602 ]' 00:18:21.602 22:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:21.602 22:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:21.602 22:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:21.602 22:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:21.602 22:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:21.602 22:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.602 22:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.602 
22:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.862 22:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:OTgyYzhiNDc0NTliZmYyZTc3YWU3ZGYwNzI0ZDIwOTdlMWI1OTRiZjc0YjljZjQyNDMxN2JjZDYwZmY0NGU4YcvVo98=: 00:18:22.432 22:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.432 22:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:22.432 22:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.432 22:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.432 22:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.432 22:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:22.432 22:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:22.432 22:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:22.432 22:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:22.432 22:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:22.693 22:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:18:22.693 22:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:22.693 22:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:22.693 22:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:22.693 22:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:22.693 22:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.693 22:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.693 22:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.693 22:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.693 22:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.693 22:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.693 22:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.953 00:18:22.954 22:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:22.954 22:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:22.954 22:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.954 22:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.954 22:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.954 22:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.954 22:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.954 22:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.954 22:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:22.954 { 00:18:22.954 "cntlid": 49, 00:18:22.954 "qid": 0, 00:18:22.954 "state": "enabled", 00:18:22.954 "thread": "nvmf_tgt_poll_group_000", 00:18:22.954 "listen_address": { 00:18:22.954 "trtype": "TCP", 00:18:22.954 "adrfam": "IPv4", 00:18:22.954 "traddr": "10.0.0.2", 00:18:22.954 "trsvcid": "4420" 00:18:22.954 }, 00:18:22.954 "peer_address": { 00:18:22.954 "trtype": "TCP", 00:18:22.954 "adrfam": "IPv4", 00:18:22.954 "traddr": "10.0.0.1", 00:18:22.954 "trsvcid": "33036" 00:18:22.954 }, 00:18:22.954 "auth": { 00:18:22.954 "state": "completed", 00:18:22.954 "digest": "sha384", 00:18:22.954 "dhgroup": "null" 00:18:22.954 } 00:18:22.954 } 00:18:22.954 ]' 00:18:23.214 22:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:23.214 22:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:23.214 22:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:23.214 22:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:23.214 22:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:23.214 22:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.214 22:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.214 22:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.475 22:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MGEwNDJiYTQ5YmE5ZmFlOTdiZWRhODFjYTVlMDI2NDc0MGM3M2ExMzYzOGQ3OTE4BxKSWw==: --dhchap-ctrl-secret DHHC-1:03:Y2IyYWZlNDQ1MDkyMDk4MGYzZjMwZmY4NjNhZDVjZDdmNGZmZDhlZDEyZjUzZWQzYjc3YmY3M2ZhODVkZjQxMrAnVt8=: 00:18:24.047 22:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.047 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.047 22:15:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:24.047 22:15:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.047 22:15:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.047 22:15:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.047 22:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:24.047 22:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:24.047 22:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:24.308 22:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:18:24.308 22:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:24.308 22:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:24.308 22:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:24.308 22:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:24.308 22:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.308 22:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.308 22:15:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.308 22:15:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.308 22:15:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.308 22:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.308 22:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.569 00:18:24.570 22:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:24.570 22:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.570 22:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:24.570 22:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.570 22:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.570 22:15:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.570 22:15:49 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:24.570 22:15:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.570 22:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:24.570 { 00:18:24.570 "cntlid": 51, 00:18:24.570 "qid": 0, 00:18:24.570 "state": "enabled", 00:18:24.570 "thread": "nvmf_tgt_poll_group_000", 00:18:24.570 "listen_address": { 00:18:24.570 "trtype": "TCP", 00:18:24.570 "adrfam": "IPv4", 00:18:24.570 "traddr": "10.0.0.2", 00:18:24.570 "trsvcid": "4420" 00:18:24.570 }, 00:18:24.570 "peer_address": { 00:18:24.570 "trtype": "TCP", 00:18:24.570 "adrfam": "IPv4", 00:18:24.570 "traddr": "10.0.0.1", 00:18:24.570 "trsvcid": "33064" 00:18:24.570 }, 00:18:24.570 "auth": { 00:18:24.570 "state": "completed", 00:18:24.570 "digest": "sha384", 00:18:24.570 "dhgroup": "null" 00:18:24.570 } 00:18:24.570 } 00:18:24.570 ]' 00:18:24.570 22:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:24.830 22:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:24.830 22:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:24.830 22:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:24.830 22:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:24.830 22:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.830 22:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.830 22:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.830 22:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZjUyOGMzZWYyNWE3MWZhYTdlMjE4OWE3MThiYTQ3NjKGD9B7: --dhchap-ctrl-secret DHHC-1:02:ODI3NmRiNGQ5NzllZWIwZDQzMWMyNGFkYjZmY2M3NzhlZDg1MzI2NDY2OGI2ZWExEZsUaA==: 00:18:25.773 22:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.773 22:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:25.773 22:15:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.773 22:15:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.773 22:15:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.773 22:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:25.773 22:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:25.773 22:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:25.773 22:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:18:25.773 22:15:51 
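Condensed, each connect_authenticate pass registers the host NQN on the target with a DH-HMAC-CHAP key pair and then attaches a controller from the host-side SPDK app with the matching keys. The sketch below is assembled from the expanded commands a few lines above (the key1/ckey1 names refer to keyring entries created earlier in auth.sh, outside this excerpt, and the target-side call is assumed to go through rpc.py on its default socket):

  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Target side: allow the host and bind its key (plus a controller key for bidirectional auth)
  $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Host side (SPDK app on /var/tmp/host.sock): attach over TCP and authenticate
  $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1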
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:25.773 22:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:25.773 22:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:25.773 22:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:25.773 22:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.773 22:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.773 22:15:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.773 22:15:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.034 22:15:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.034 22:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.034 22:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.034 00:18:26.034 22:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:26.034 22:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:26.034 22:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.294 22:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.294 22:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.294 22:15:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.294 22:15:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.294 22:15:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.294 22:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:26.294 { 00:18:26.294 "cntlid": 53, 00:18:26.294 "qid": 0, 00:18:26.294 "state": "enabled", 00:18:26.294 "thread": "nvmf_tgt_poll_group_000", 00:18:26.294 "listen_address": { 00:18:26.294 "trtype": "TCP", 00:18:26.294 "adrfam": "IPv4", 00:18:26.294 "traddr": "10.0.0.2", 00:18:26.294 "trsvcid": "4420" 00:18:26.294 }, 00:18:26.294 "peer_address": { 00:18:26.294 "trtype": "TCP", 00:18:26.294 "adrfam": "IPv4", 00:18:26.294 "traddr": "10.0.0.1", 00:18:26.294 "trsvcid": "33098" 00:18:26.294 }, 00:18:26.294 "auth": { 00:18:26.294 "state": "completed", 00:18:26.294 "digest": "sha384", 00:18:26.294 "dhgroup": "null" 00:18:26.294 } 00:18:26.294 } 00:18:26.294 ]' 00:18:26.294 22:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:26.294 22:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:18:26.294 22:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:26.294 22:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:26.294 22:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:26.555 22:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.555 22:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.555 22:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.555 22:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZDA0ODc4Y2M1YjVjYzU0NzQwODA3YTg2MDZjYTU1ODMzODNhMzhiYjcxY2Q5NGU52o8hBg==: --dhchap-ctrl-secret DHHC-1:01:NDgyNjRjNzhhY2YyNTQyNzg4ODhmMGMyNDNjNjgwOGQbj5zh: 00:18:27.496 22:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.496 22:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:27.496 22:15:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.496 22:15:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.496 22:15:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.496 22:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:27.496 22:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:27.496 22:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:27.757 22:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:18:27.757 22:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:27.757 22:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:27.757 22:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:27.757 22:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:27.757 22:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.757 22:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:27.757 22:15:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.757 22:15:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.757 22:15:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.757 22:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:27.757 22:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:27.757 00:18:27.757 22:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:27.757 22:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:27.757 22:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.018 22:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.018 22:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.018 22:15:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.018 22:15:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.018 22:15:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.018 22:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:28.018 { 00:18:28.018 "cntlid": 55, 00:18:28.018 "qid": 0, 00:18:28.018 "state": "enabled", 00:18:28.018 "thread": "nvmf_tgt_poll_group_000", 00:18:28.018 "listen_address": { 00:18:28.018 "trtype": "TCP", 00:18:28.018 "adrfam": "IPv4", 00:18:28.018 "traddr": "10.0.0.2", 00:18:28.018 "trsvcid": "4420" 00:18:28.018 }, 00:18:28.018 "peer_address": { 00:18:28.018 "trtype": "TCP", 00:18:28.018 "adrfam": "IPv4", 00:18:28.018 "traddr": "10.0.0.1", 00:18:28.018 "trsvcid": "33128" 00:18:28.018 }, 00:18:28.018 "auth": { 00:18:28.018 "state": "completed", 00:18:28.018 "digest": "sha384", 00:18:28.018 "dhgroup": "null" 00:18:28.018 } 00:18:28.018 } 00:18:28.018 ]' 00:18:28.018 22:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:28.018 22:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:28.018 22:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:28.278 22:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:28.278 22:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:28.278 22:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.278 22:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.278 22:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.278 22:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:OTgyYzhiNDc0NTliZmYyZTc3YWU3ZGYwNzI0ZDIwOTdlMWI1OTRiZjc0YjljZjQyNDMxN2JjZDYwZmY0NGU4YcvVo98=: 00:18:29.253 22:15:54 
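The nvme-cli leg of each iteration exercises the kernel initiator as well: once the SPDK host-side controller has been detached, the same subsystem is connected with the DHHC-1 secret passed literally on the command line and then disconnected. Roughly, for a one-way round like the one just logged (the secret is shortened to a placeholder here; bidirectional rounds additionally pass --dhchap-ctrl-secret):

  # Kernel initiator: one-way DH-HMAC-CHAP, host secret only
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
      --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be \
      --dhchap-secret 'DHHC-1:03:<host secret generated earlier in auth.sh>:'

  nvme disconnect -n nqn.2024-03.io.spdk:cnode0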
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.253 22:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:29.253 22:15:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.253 22:15:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.253 22:15:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.253 22:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:29.253 22:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:29.253 22:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:29.253 22:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:29.253 22:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:18:29.253 22:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:29.253 22:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:29.253 22:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:29.253 22:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:29.253 22:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.253 22:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.253 22:15:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.253 22:15:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.253 22:15:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.253 22:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.253 22:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.514 00:18:29.514 22:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:29.514 22:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:29.514 22:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.774 22:15:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.774 22:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.774 22:15:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.774 22:15:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.774 22:15:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.774 22:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:29.774 { 00:18:29.774 "cntlid": 57, 00:18:29.774 "qid": 0, 00:18:29.774 "state": "enabled", 00:18:29.774 "thread": "nvmf_tgt_poll_group_000", 00:18:29.774 "listen_address": { 00:18:29.774 "trtype": "TCP", 00:18:29.774 "adrfam": "IPv4", 00:18:29.774 "traddr": "10.0.0.2", 00:18:29.774 "trsvcid": "4420" 00:18:29.774 }, 00:18:29.774 "peer_address": { 00:18:29.774 "trtype": "TCP", 00:18:29.774 "adrfam": "IPv4", 00:18:29.774 "traddr": "10.0.0.1", 00:18:29.774 "trsvcid": "33154" 00:18:29.774 }, 00:18:29.774 "auth": { 00:18:29.774 "state": "completed", 00:18:29.774 "digest": "sha384", 00:18:29.774 "dhgroup": "ffdhe2048" 00:18:29.774 } 00:18:29.774 } 00:18:29.774 ]' 00:18:29.774 22:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:29.774 22:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:29.774 22:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:29.774 22:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:29.774 22:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:29.774 22:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.774 22:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.774 22:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.034 22:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MGEwNDJiYTQ5YmE5ZmFlOTdiZWRhODFjYTVlMDI2NDc0MGM3M2ExMzYzOGQ3OTE4BxKSWw==: --dhchap-ctrl-secret DHHC-1:03:Y2IyYWZlNDQ1MDkyMDk4MGYzZjMwZmY4NjNhZDVjZDdmNGZmZDhlZDEyZjUzZWQzYjc3YmY3M2ZhODVkZjQxMrAnVt8=: 00:18:30.604 22:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.605 22:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:30.865 22:15:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.865 22:15:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.865 22:15:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.865 22:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:30.865 22:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:30.865 22:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:30.865 22:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:18:30.865 22:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:30.865 22:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:30.865 22:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:30.865 22:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:30.865 22:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.865 22:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.865 22:15:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.865 22:15:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.865 22:15:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.865 22:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.865 22:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.126 00:18:31.126 22:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:31.126 22:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.126 22:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:31.387 22:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.387 22:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.387 22:15:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.387 22:15:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.387 22:15:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.387 22:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:31.387 { 00:18:31.387 "cntlid": 59, 00:18:31.387 "qid": 0, 00:18:31.387 "state": "enabled", 00:18:31.387 "thread": "nvmf_tgt_poll_group_000", 00:18:31.387 "listen_address": { 00:18:31.387 "trtype": "TCP", 00:18:31.387 "adrfam": "IPv4", 00:18:31.387 "traddr": "10.0.0.2", 00:18:31.387 "trsvcid": "4420" 00:18:31.387 }, 00:18:31.387 "peer_address": { 00:18:31.387 "trtype": "TCP", 00:18:31.387 "adrfam": "IPv4", 00:18:31.387 
"traddr": "10.0.0.1", 00:18:31.387 "trsvcid": "33166" 00:18:31.387 }, 00:18:31.387 "auth": { 00:18:31.387 "state": "completed", 00:18:31.387 "digest": "sha384", 00:18:31.387 "dhgroup": "ffdhe2048" 00:18:31.387 } 00:18:31.387 } 00:18:31.387 ]' 00:18:31.387 22:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:31.387 22:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:31.387 22:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:31.387 22:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:31.387 22:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:31.387 22:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.387 22:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.387 22:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.647 22:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZjUyOGMzZWYyNWE3MWZhYTdlMjE4OWE3MThiYTQ3NjKGD9B7: --dhchap-ctrl-secret DHHC-1:02:ODI3NmRiNGQ5NzllZWIwZDQzMWMyNGFkYjZmY2M3NzhlZDg1MzI2NDY2OGI2ZWExEZsUaA==: 00:18:32.216 22:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.216 22:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:32.216 22:15:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.216 22:15:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.476 22:15:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.476 22:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:32.476 22:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:32.476 22:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:32.476 22:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:18:32.476 22:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:32.476 22:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:32.476 22:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:32.476 22:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:32.476 22:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.476 22:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.476 22:15:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.476 22:15:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.476 22:15:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.476 22:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.476 22:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.736 00:18:32.736 22:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:32.736 22:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:32.736 22:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.996 22:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.996 22:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.996 22:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.996 22:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.996 22:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.996 22:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:32.996 { 00:18:32.996 "cntlid": 61, 00:18:32.996 "qid": 0, 00:18:32.996 "state": "enabled", 00:18:32.996 "thread": "nvmf_tgt_poll_group_000", 00:18:32.996 "listen_address": { 00:18:32.996 "trtype": "TCP", 00:18:32.996 "adrfam": "IPv4", 00:18:32.996 "traddr": "10.0.0.2", 00:18:32.996 "trsvcid": "4420" 00:18:32.996 }, 00:18:32.996 "peer_address": { 00:18:32.996 "trtype": "TCP", 00:18:32.996 "adrfam": "IPv4", 00:18:32.996 "traddr": "10.0.0.1", 00:18:32.996 "trsvcid": "35446" 00:18:32.996 }, 00:18:32.996 "auth": { 00:18:32.996 "state": "completed", 00:18:32.996 "digest": "sha384", 00:18:32.996 "dhgroup": "ffdhe2048" 00:18:32.996 } 00:18:32.996 } 00:18:32.996 ]' 00:18:32.996 22:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:32.996 22:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:32.996 22:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:32.996 22:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:32.996 22:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:32.996 22:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.996 22:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.996 22:15:58 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.256 22:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZDA0ODc4Y2M1YjVjYzU0NzQwODA3YTg2MDZjYTU1ODMzODNhMzhiYjcxY2Q5NGU52o8hBg==: --dhchap-ctrl-secret DHHC-1:01:NDgyNjRjNzhhY2YyNTQyNzg4ODhmMGMyNDNjNjgwOGQbj5zh: 00:18:33.827 22:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.827 22:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:33.827 22:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.087 22:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.087 22:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.087 22:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:34.087 22:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:34.087 22:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:34.087 22:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:18:34.087 22:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:34.087 22:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:34.087 22:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:34.087 22:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:34.087 22:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.087 22:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:34.087 22:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.087 22:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.087 22:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.087 22:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:34.087 22:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:34.347 00:18:34.347 22:15:59 
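One detail worth noting in the key3 rounds: there is no matching controller key, so the ckey expansion at target/auth.sh@37 contributes nothing and the round quietly becomes one-way authentication (only --dhchap-key key3 is passed, as the expanded add_host and attach commands above show). The same bash idiom in isolation, with illustrative values:

  # Sketch of the ckey handling in connect_authenticate
  declare -a ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2)   # no entry for index 3
  keyid=3
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  echo "extra host args: --dhchap-key key$keyid ${ckey[*]}"
  # prints only "--dhchap-key key3"; with keyid=1 it would also emit
  # "--dhchap-ctrlr-key ckey1"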
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:34.347 22:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.347 22:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:34.607 22:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.607 22:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.607 22:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.607 22:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.607 22:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.607 22:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:34.607 { 00:18:34.607 "cntlid": 63, 00:18:34.607 "qid": 0, 00:18:34.607 "state": "enabled", 00:18:34.607 "thread": "nvmf_tgt_poll_group_000", 00:18:34.607 "listen_address": { 00:18:34.607 "trtype": "TCP", 00:18:34.607 "adrfam": "IPv4", 00:18:34.607 "traddr": "10.0.0.2", 00:18:34.607 "trsvcid": "4420" 00:18:34.607 }, 00:18:34.607 "peer_address": { 00:18:34.607 "trtype": "TCP", 00:18:34.607 "adrfam": "IPv4", 00:18:34.607 "traddr": "10.0.0.1", 00:18:34.607 "trsvcid": "35478" 00:18:34.607 }, 00:18:34.607 "auth": { 00:18:34.607 "state": "completed", 00:18:34.607 "digest": "sha384", 00:18:34.607 "dhgroup": "ffdhe2048" 00:18:34.607 } 00:18:34.607 } 00:18:34.607 ]' 00:18:34.607 22:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:34.607 22:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:34.607 22:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:34.607 22:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:34.607 22:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:34.607 22:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.607 22:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.607 22:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.867 22:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:OTgyYzhiNDc0NTliZmYyZTc3YWU3ZGYwNzI0ZDIwOTdlMWI1OTRiZjc0YjljZjQyNDMxN2JjZDYwZmY0NGU4YcvVo98=: 00:18:35.809 22:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.809 22:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:35.809 22:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.809 22:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
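Between iterations the script tears the whole path back down so the next digest/dhgroup/key combination starts from a clean slate. In outline, again assuming the wrappers map straight onto rpc.py and using the NQNs from this run:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

  $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0   # drop the SPDK host-side controller
  nvme disconnect -n "$SUBNQN"                                   # drop the kernel initiator connection
  $RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"           # de-register the host on the target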
00:18:35.809 22:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.809 22:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:35.809 22:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:35.809 22:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:35.809 22:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:35.809 22:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:18:35.809 22:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:35.809 22:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:35.809 22:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:35.809 22:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:35.809 22:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.809 22:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.809 22:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.809 22:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.809 22:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.809 22:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.809 22:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.069 00:18:36.069 22:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:36.069 22:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:36.069 22:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.069 22:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.069 22:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.069 22:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.069 22:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.069 22:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.069 22:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:36.069 { 
00:18:36.069 "cntlid": 65, 00:18:36.069 "qid": 0, 00:18:36.069 "state": "enabled", 00:18:36.069 "thread": "nvmf_tgt_poll_group_000", 00:18:36.069 "listen_address": { 00:18:36.069 "trtype": "TCP", 00:18:36.069 "adrfam": "IPv4", 00:18:36.069 "traddr": "10.0.0.2", 00:18:36.069 "trsvcid": "4420" 00:18:36.069 }, 00:18:36.069 "peer_address": { 00:18:36.069 "trtype": "TCP", 00:18:36.069 "adrfam": "IPv4", 00:18:36.069 "traddr": "10.0.0.1", 00:18:36.069 "trsvcid": "35504" 00:18:36.069 }, 00:18:36.069 "auth": { 00:18:36.069 "state": "completed", 00:18:36.069 "digest": "sha384", 00:18:36.069 "dhgroup": "ffdhe3072" 00:18:36.069 } 00:18:36.069 } 00:18:36.069 ]' 00:18:36.069 22:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:36.330 22:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:36.330 22:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:36.330 22:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:36.330 22:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:36.330 22:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.330 22:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.330 22:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.591 22:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MGEwNDJiYTQ5YmE5ZmFlOTdiZWRhODFjYTVlMDI2NDc0MGM3M2ExMzYzOGQ3OTE4BxKSWw==: --dhchap-ctrl-secret DHHC-1:03:Y2IyYWZlNDQ1MDkyMDk4MGYzZjMwZmY4NjNhZDVjZDdmNGZmZDhlZDEyZjUzZWQzYjc3YmY3M2ZhODVkZjQxMrAnVt8=: 00:18:37.161 22:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.161 22:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:37.161 22:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.161 22:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.161 22:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.161 22:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:37.161 22:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:37.161 22:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:37.422 22:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:18:37.422 22:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:37.422 22:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:18:37.422 22:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:37.422 22:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:37.422 22:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.422 22:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.422 22:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.422 22:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.422 22:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.422 22:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.422 22:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.682 00:18:37.682 22:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:37.682 22:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.682 22:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:37.941 22:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.941 22:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.941 22:16:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.941 22:16:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.941 22:16:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.941 22:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:37.941 { 00:18:37.941 "cntlid": 67, 00:18:37.941 "qid": 0, 00:18:37.941 "state": "enabled", 00:18:37.941 "thread": "nvmf_tgt_poll_group_000", 00:18:37.941 "listen_address": { 00:18:37.941 "trtype": "TCP", 00:18:37.941 "adrfam": "IPv4", 00:18:37.941 "traddr": "10.0.0.2", 00:18:37.941 "trsvcid": "4420" 00:18:37.941 }, 00:18:37.941 "peer_address": { 00:18:37.941 "trtype": "TCP", 00:18:37.941 "adrfam": "IPv4", 00:18:37.941 "traddr": "10.0.0.1", 00:18:37.941 "trsvcid": "35522" 00:18:37.941 }, 00:18:37.941 "auth": { 00:18:37.941 "state": "completed", 00:18:37.941 "digest": "sha384", 00:18:37.941 "dhgroup": "ffdhe3072" 00:18:37.941 } 00:18:37.941 } 00:18:37.941 ]' 00:18:37.941 22:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:37.942 22:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:37.942 22:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:37.942 22:16:03 
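Each outer-loop pass (target/auth.sh@94) appears to reconfigure the host-side SPDK app so that only the digest and DH group under test are allowed for DH-HMAC-CHAP, which is why the subsequent get_qpairs check can assert on an exact pairing. For the round shown here that amounts to:

  # Host-side SPDK app: allow exactly one digest and one DH group for this round
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072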
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:37.942 22:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:37.942 22:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.942 22:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.942 22:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.201 22:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZjUyOGMzZWYyNWE3MWZhYTdlMjE4OWE3MThiYTQ3NjKGD9B7: --dhchap-ctrl-secret DHHC-1:02:ODI3NmRiNGQ5NzllZWIwZDQzMWMyNGFkYjZmY2M3NzhlZDg1MzI2NDY2OGI2ZWExEZsUaA==: 00:18:38.769 22:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.770 22:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:38.770 22:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.770 22:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.770 22:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.770 22:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:38.770 22:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:38.770 22:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:39.032 22:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:18:39.032 22:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:39.032 22:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:39.032 22:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:39.032 22:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:39.032 22:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.032 22:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.032 22:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.032 22:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.032 22:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.032 22:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.032 22:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.292 00:18:39.292 22:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:39.292 22:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.292 22:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:39.554 22:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.554 22:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.554 22:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.554 22:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.554 22:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.554 22:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:39.554 { 00:18:39.554 "cntlid": 69, 00:18:39.554 "qid": 0, 00:18:39.554 "state": "enabled", 00:18:39.554 "thread": "nvmf_tgt_poll_group_000", 00:18:39.554 "listen_address": { 00:18:39.554 "trtype": "TCP", 00:18:39.554 "adrfam": "IPv4", 00:18:39.554 "traddr": "10.0.0.2", 00:18:39.554 "trsvcid": "4420" 00:18:39.554 }, 00:18:39.554 "peer_address": { 00:18:39.554 "trtype": "TCP", 00:18:39.554 "adrfam": "IPv4", 00:18:39.554 "traddr": "10.0.0.1", 00:18:39.554 "trsvcid": "35548" 00:18:39.554 }, 00:18:39.554 "auth": { 00:18:39.554 "state": "completed", 00:18:39.554 "digest": "sha384", 00:18:39.554 "dhgroup": "ffdhe3072" 00:18:39.554 } 00:18:39.554 } 00:18:39.554 ]' 00:18:39.554 22:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:39.554 22:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:39.554 22:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:39.554 22:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:39.554 22:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:39.554 22:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.554 22:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.554 22:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.815 22:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZDA0ODc4Y2M1YjVjYzU0NzQwODA3YTg2MDZjYTU1ODMzODNhMzhiYjcxY2Q5NGU52o8hBg==: --dhchap-ctrl-secret 
DHHC-1:01:NDgyNjRjNzhhY2YyNTQyNzg4ODhmMGMyNDNjNjgwOGQbj5zh: 00:18:40.758 22:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.758 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.758 22:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:40.758 22:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.758 22:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.758 22:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.758 22:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:40.758 22:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:40.758 22:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:40.758 22:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:18:40.758 22:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:40.758 22:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:40.758 22:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:40.758 22:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:40.758 22:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.758 22:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:40.758 22:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.758 22:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.758 22:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.758 22:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:40.758 22:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:41.019 00:18:41.019 22:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:41.019 22:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:41.019 22:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.019 22:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.280 22:16:06 nvmf_tcp.nvmf_auth_target -- 
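Note that the key3 round above adds the host with --dhchap-key key3 only: the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion yields an empty array when no controller key is defined for that key id, so bidirectional (controller) authentication is simply not requested for that round. A small, self-contained illustration of the same bash idiom, with hypothetical secret values:

    # Illustration of the ${var:+...} array expansion used above: when the
    # controller key for an id is empty, no --dhchap-ctrlr-key argument is built.
    ckeys=([0]=c0secret [1]=c1secret [2]=c2secret [3]=)   # id 3 has no ctrlr key
    for id in "${!ckeys[@]}"; do
        ckey=(${ckeys[$id]:+--dhchap-ctrlr-key "ckey$id"})
        echo "key$id extra args: ${ckey[*]:-<none>}"
    done
    # prints "--dhchap-ctrlr-key ckeyN" for ids 0-2 and "<none>" for id 3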
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.280 22:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.280 22:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.280 22:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.280 22:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:41.280 { 00:18:41.280 "cntlid": 71, 00:18:41.280 "qid": 0, 00:18:41.280 "state": "enabled", 00:18:41.280 "thread": "nvmf_tgt_poll_group_000", 00:18:41.280 "listen_address": { 00:18:41.280 "trtype": "TCP", 00:18:41.280 "adrfam": "IPv4", 00:18:41.280 "traddr": "10.0.0.2", 00:18:41.280 "trsvcid": "4420" 00:18:41.280 }, 00:18:41.280 "peer_address": { 00:18:41.280 "trtype": "TCP", 00:18:41.280 "adrfam": "IPv4", 00:18:41.280 "traddr": "10.0.0.1", 00:18:41.280 "trsvcid": "35578" 00:18:41.280 }, 00:18:41.280 "auth": { 00:18:41.280 "state": "completed", 00:18:41.280 "digest": "sha384", 00:18:41.280 "dhgroup": "ffdhe3072" 00:18:41.280 } 00:18:41.280 } 00:18:41.280 ]' 00:18:41.280 22:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:41.280 22:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:41.280 22:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:41.280 22:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:41.280 22:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:41.280 22:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.280 22:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.280 22:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.540 22:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:OTgyYzhiNDc0NTliZmYyZTc3YWU3ZGYwNzI0ZDIwOTdlMWI1OTRiZjc0YjljZjQyNDMxN2JjZDYwZmY0NGU4YcvVo98=: 00:18:42.110 22:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.110 22:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:42.110 22:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.110 22:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.110 22:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.110 22:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:42.110 22:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:42.110 22:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:42.110 22:16:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:42.370 22:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:18:42.370 22:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:42.370 22:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:42.370 22:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:42.370 22:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:42.370 22:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.370 22:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.371 22:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.371 22:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.371 22:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.371 22:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.371 22:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.631 00:18:42.631 22:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:42.631 22:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:42.631 22:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.892 22:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.892 22:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.892 22:16:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.892 22:16:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.892 22:16:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.892 22:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:42.892 { 00:18:42.892 "cntlid": 73, 00:18:42.892 "qid": 0, 00:18:42.892 "state": "enabled", 00:18:42.892 "thread": "nvmf_tgt_poll_group_000", 00:18:42.892 "listen_address": { 00:18:42.892 "trtype": "TCP", 00:18:42.892 "adrfam": "IPv4", 00:18:42.892 "traddr": "10.0.0.2", 00:18:42.892 "trsvcid": "4420" 00:18:42.892 }, 00:18:42.892 "peer_address": { 00:18:42.892 "trtype": "TCP", 00:18:42.892 "adrfam": "IPv4", 00:18:42.892 "traddr": "10.0.0.1", 00:18:42.892 "trsvcid": "41682" 00:18:42.892 }, 00:18:42.892 "auth": { 00:18:42.892 
"state": "completed", 00:18:42.892 "digest": "sha384", 00:18:42.892 "dhgroup": "ffdhe4096" 00:18:42.892 } 00:18:42.892 } 00:18:42.892 ]' 00:18:42.892 22:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:42.892 22:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:42.892 22:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:42.892 22:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:42.892 22:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:42.892 22:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.892 22:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.892 22:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.184 22:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MGEwNDJiYTQ5YmE5ZmFlOTdiZWRhODFjYTVlMDI2NDc0MGM3M2ExMzYzOGQ3OTE4BxKSWw==: --dhchap-ctrl-secret DHHC-1:03:Y2IyYWZlNDQ1MDkyMDk4MGYzZjMwZmY4NjNhZDVjZDdmNGZmZDhlZDEyZjUzZWQzYjc3YmY3M2ZhODVkZjQxMrAnVt8=: 00:18:43.759 22:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.018 22:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:44.018 22:16:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.018 22:16:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.018 22:16:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.018 22:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:44.018 22:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:44.018 22:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:44.018 22:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:18:44.018 22:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:44.018 22:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:44.018 22:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:44.018 22:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:44.018 22:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.018 22:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.018 22:16:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.018 22:16:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.018 22:16:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.018 22:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.018 22:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.279 00:18:44.279 22:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:44.279 22:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.279 22:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:44.540 22:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.540 22:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.540 22:16:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.540 22:16:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.540 22:16:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.541 22:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:44.541 { 00:18:44.541 "cntlid": 75, 00:18:44.541 "qid": 0, 00:18:44.541 "state": "enabled", 00:18:44.541 "thread": "nvmf_tgt_poll_group_000", 00:18:44.541 "listen_address": { 00:18:44.541 "trtype": "TCP", 00:18:44.541 "adrfam": "IPv4", 00:18:44.541 "traddr": "10.0.0.2", 00:18:44.541 "trsvcid": "4420" 00:18:44.541 }, 00:18:44.541 "peer_address": { 00:18:44.541 "trtype": "TCP", 00:18:44.541 "adrfam": "IPv4", 00:18:44.541 "traddr": "10.0.0.1", 00:18:44.541 "trsvcid": "41704" 00:18:44.541 }, 00:18:44.541 "auth": { 00:18:44.541 "state": "completed", 00:18:44.541 "digest": "sha384", 00:18:44.541 "dhgroup": "ffdhe4096" 00:18:44.541 } 00:18:44.541 } 00:18:44.541 ]' 00:18:44.541 22:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:44.541 22:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:44.541 22:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:44.541 22:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:44.541 22:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:44.541 22:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.541 22:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.541 22:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
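Each round is verified the same way: the subsystem's qpairs are fetched from the target and the negotiated authentication parameters are compared with what the host was configured to offer, with auth.state expected to read "completed". A sketch of that check, reusing the jq paths from the trace; expected values mirror the sha384/ffdhe4096 rounds above:

    # Verification step: query the target for the subsystem's qpairs and check
    # the negotiated digest/dhgroup and the final authentication state.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    qpairs=$("$RPC" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]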
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.803 22:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZjUyOGMzZWYyNWE3MWZhYTdlMjE4OWE3MThiYTQ3NjKGD9B7: --dhchap-ctrl-secret DHHC-1:02:ODI3NmRiNGQ5NzllZWIwZDQzMWMyNGFkYjZmY2M3NzhlZDg1MzI2NDY2OGI2ZWExEZsUaA==: 00:18:45.746 22:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.746 22:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:45.746 22:16:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.746 22:16:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.746 22:16:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.746 22:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:45.746 22:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:45.746 22:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:45.746 22:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:18:45.746 22:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:45.746 22:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:45.746 22:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:45.746 22:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:45.746 22:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.746 22:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.746 22:16:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.746 22:16:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.746 22:16:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.746 22:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.746 22:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:18:46.006 00:18:46.006 22:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:46.006 22:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:46.006 22:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.266 22:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.266 22:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.266 22:16:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.266 22:16:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.266 22:16:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.266 22:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:46.266 { 00:18:46.266 "cntlid": 77, 00:18:46.266 "qid": 0, 00:18:46.266 "state": "enabled", 00:18:46.266 "thread": "nvmf_tgt_poll_group_000", 00:18:46.266 "listen_address": { 00:18:46.266 "trtype": "TCP", 00:18:46.266 "adrfam": "IPv4", 00:18:46.266 "traddr": "10.0.0.2", 00:18:46.266 "trsvcid": "4420" 00:18:46.266 }, 00:18:46.266 "peer_address": { 00:18:46.266 "trtype": "TCP", 00:18:46.266 "adrfam": "IPv4", 00:18:46.266 "traddr": "10.0.0.1", 00:18:46.266 "trsvcid": "41724" 00:18:46.266 }, 00:18:46.266 "auth": { 00:18:46.266 "state": "completed", 00:18:46.266 "digest": "sha384", 00:18:46.266 "dhgroup": "ffdhe4096" 00:18:46.266 } 00:18:46.266 } 00:18:46.266 ]' 00:18:46.266 22:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:46.266 22:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:46.266 22:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:46.266 22:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:46.266 22:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:46.266 22:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.266 22:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.266 22:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.526 22:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZDA0ODc4Y2M1YjVjYzU0NzQwODA3YTg2MDZjYTU1ODMzODNhMzhiYjcxY2Q5NGU52o8hBg==: --dhchap-ctrl-secret DHHC-1:01:NDgyNjRjNzhhY2YyNTQyNzg4ODhmMGMyNDNjNjgwOGQbj5zh: 00:18:47.468 22:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.468 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.468 22:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:47.468 22:16:12 nvmf_tcp.nvmf_auth_target -- 
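Besides the userspace bdev_nvme path, every round also exercises the kernel initiator: nvme connect receives the secrets inline in DHHC-1 form, --dhchap-secret carrying the host key and --dhchap-ctrl-secret the controller key (omitted for the key3 rounds). A sketch with placeholder secrets; the real values are the base64 strings in the trace, and the leading "DHHC-1:NN:" field appears to identify the hash transformation of the secret (00 meaning an unhashed secret):

    # Kernel-initiator leg of a round, with placeholder secrets (the trace uses
    # the actual base64-encoded DHHC-1 strings configured for key2/ckey2).
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
         -q "$HOSTNQN" --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be \
         --dhchap-secret      'DHHC-1:02:<base64-host-secret>:' \
         --dhchap-ctrl-secret 'DHHC-1:01:<base64-ctrl-secret>:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0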
common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.468 22:16:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.468 22:16:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.468 22:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:47.468 22:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:47.468 22:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:47.468 22:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:18:47.468 22:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:47.468 22:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:47.468 22:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:47.468 22:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:47.468 22:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.468 22:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:47.468 22:16:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.468 22:16:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.468 22:16:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.468 22:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:47.468 22:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:47.728 00:18:47.728 22:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:47.728 22:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:47.728 22:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.988 22:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.988 22:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.988 22:16:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.988 22:16:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.988 22:16:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.988 22:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:47.988 { 00:18:47.988 "cntlid": 79, 00:18:47.988 "qid": 
0, 00:18:47.988 "state": "enabled", 00:18:47.988 "thread": "nvmf_tgt_poll_group_000", 00:18:47.988 "listen_address": { 00:18:47.988 "trtype": "TCP", 00:18:47.988 "adrfam": "IPv4", 00:18:47.988 "traddr": "10.0.0.2", 00:18:47.988 "trsvcid": "4420" 00:18:47.988 }, 00:18:47.988 "peer_address": { 00:18:47.988 "trtype": "TCP", 00:18:47.988 "adrfam": "IPv4", 00:18:47.988 "traddr": "10.0.0.1", 00:18:47.988 "trsvcid": "41762" 00:18:47.988 }, 00:18:47.988 "auth": { 00:18:47.988 "state": "completed", 00:18:47.988 "digest": "sha384", 00:18:47.988 "dhgroup": "ffdhe4096" 00:18:47.988 } 00:18:47.988 } 00:18:47.988 ]' 00:18:47.988 22:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:47.988 22:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:47.988 22:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:47.988 22:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:47.988 22:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:47.988 22:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.988 22:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.988 22:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.247 22:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:OTgyYzhiNDc0NTliZmYyZTc3YWU3ZGYwNzI0ZDIwOTdlMWI1OTRiZjc0YjljZjQyNDMxN2JjZDYwZmY0NGU4YcvVo98=: 00:18:48.816 22:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.816 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.816 22:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:48.816 22:16:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.816 22:16:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.816 22:16:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.816 22:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:48.816 22:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:48.816 22:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:48.816 22:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:49.076 22:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:18:49.076 22:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:49.076 22:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:49.076 22:16:14 
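The repetition in this part of the log comes from two nested loops: each DH group offered by the host (ffdhe3072 above, ffdhe4096 here, ffdhe6144 and ffdhe8192 below) is exercised against every key id. A condensed sketch of that driver; the helper names (hostrpc, connect_authenticate) are taken from the trace, while the exact list contents in target/auth.sh are assumed:

    # Shape of the loop driving this section (lists assumed; only sha384 is
    # exercised in this excerpt, other digests appear elsewhere in the log).
    dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    keys=(key0 key1 key2 key3)

    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            hostrpc bdev_nvme_set_options --dhchap-digests sha384 \
                    --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha384 "$dhgroup" "$keyid"
        done
    done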
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:49.076 22:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:49.076 22:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.076 22:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.076 22:16:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.076 22:16:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.076 22:16:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.076 22:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.076 22:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.336 00:18:49.336 22:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:49.336 22:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:49.336 22:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.597 22:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.597 22:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.597 22:16:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.597 22:16:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.597 22:16:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.597 22:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:49.597 { 00:18:49.597 "cntlid": 81, 00:18:49.597 "qid": 0, 00:18:49.597 "state": "enabled", 00:18:49.597 "thread": "nvmf_tgt_poll_group_000", 00:18:49.597 "listen_address": { 00:18:49.597 "trtype": "TCP", 00:18:49.597 "adrfam": "IPv4", 00:18:49.597 "traddr": "10.0.0.2", 00:18:49.597 "trsvcid": "4420" 00:18:49.597 }, 00:18:49.597 "peer_address": { 00:18:49.597 "trtype": "TCP", 00:18:49.597 "adrfam": "IPv4", 00:18:49.597 "traddr": "10.0.0.1", 00:18:49.597 "trsvcid": "41780" 00:18:49.597 }, 00:18:49.597 "auth": { 00:18:49.597 "state": "completed", 00:18:49.597 "digest": "sha384", 00:18:49.597 "dhgroup": "ffdhe6144" 00:18:49.597 } 00:18:49.597 } 00:18:49.597 ]' 00:18:49.597 22:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:49.597 22:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:49.597 22:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:49.597 22:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:49.597 22:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:49.597 22:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.597 22:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.597 22:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.857 22:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MGEwNDJiYTQ5YmE5ZmFlOTdiZWRhODFjYTVlMDI2NDc0MGM3M2ExMzYzOGQ3OTE4BxKSWw==: --dhchap-ctrl-secret DHHC-1:03:Y2IyYWZlNDQ1MDkyMDk4MGYzZjMwZmY4NjNhZDVjZDdmNGZmZDhlZDEyZjUzZWQzYjc3YmY3M2ZhODVkZjQxMrAnVt8=: 00:18:50.799 22:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.799 22:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:50.799 22:16:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.799 22:16:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.799 22:16:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.799 22:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:50.799 22:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:50.799 22:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:50.799 22:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:18:50.799 22:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:50.799 22:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:50.799 22:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:50.799 22:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:50.799 22:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.799 22:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.799 22:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.799 22:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.799 22:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.799 22:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.800 22:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.060 00:18:51.060 22:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:51.321 22:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:51.321 22:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.321 22:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.321 22:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.321 22:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.321 22:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.321 22:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.321 22:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:51.321 { 00:18:51.321 "cntlid": 83, 00:18:51.321 "qid": 0, 00:18:51.321 "state": "enabled", 00:18:51.321 "thread": "nvmf_tgt_poll_group_000", 00:18:51.321 "listen_address": { 00:18:51.321 "trtype": "TCP", 00:18:51.321 "adrfam": "IPv4", 00:18:51.321 "traddr": "10.0.0.2", 00:18:51.321 "trsvcid": "4420" 00:18:51.321 }, 00:18:51.321 "peer_address": { 00:18:51.321 "trtype": "TCP", 00:18:51.321 "adrfam": "IPv4", 00:18:51.321 "traddr": "10.0.0.1", 00:18:51.321 "trsvcid": "41812" 00:18:51.321 }, 00:18:51.321 "auth": { 00:18:51.321 "state": "completed", 00:18:51.321 "digest": "sha384", 00:18:51.321 "dhgroup": "ffdhe6144" 00:18:51.321 } 00:18:51.321 } 00:18:51.321 ]' 00:18:51.321 22:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:51.321 22:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:51.321 22:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:51.581 22:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:51.581 22:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:51.581 22:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.581 22:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.581 22:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.581 22:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZjUyOGMzZWYyNWE3MWZhYTdlMjE4OWE3MThiYTQ3NjKGD9B7: --dhchap-ctrl-secret 
DHHC-1:02:ODI3NmRiNGQ5NzllZWIwZDQzMWMyNGFkYjZmY2M3NzhlZDg1MzI2NDY2OGI2ZWExEZsUaA==: 00:18:52.523 22:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.523 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.523 22:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:52.523 22:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.523 22:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.523 22:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.523 22:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:52.523 22:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:52.523 22:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:52.523 22:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:18:52.523 22:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:52.523 22:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:52.523 22:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:52.523 22:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:52.523 22:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.523 22:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.523 22:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.523 22:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.523 22:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.523 22:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.523 22:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.093 00:18:53.093 22:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:53.094 22:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.094 22:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:53.094 22:16:18 nvmf_tcp.nvmf_auth_target -- 
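Two separate SPDK applications are being driven throughout: rpc_cmd talks to the nvmf target over its default RPC socket, while the hostrpc wrapper expanded on every "target/auth.sh@31" line points rpc.py at /var/tmp/host.sock, where the test's host-side bdev_nvme application listens. A minimal stand-in for that wrapper, using the paths from the log:

    # Minimal stand-in for the hostrpc helper seen in the trace: same rpc.py,
    # different UNIX socket, so commands reach the host-side bdev_nvme app
    # instead of the nvmf target.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    hostrpc() {
        "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
    }

    hostrpc bdev_nvme_get_controllers | jq -r '.[].name'   # expected: nvme0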
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.094 22:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.094 22:16:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.094 22:16:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.094 22:16:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.094 22:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:53.094 { 00:18:53.094 "cntlid": 85, 00:18:53.094 "qid": 0, 00:18:53.094 "state": "enabled", 00:18:53.094 "thread": "nvmf_tgt_poll_group_000", 00:18:53.094 "listen_address": { 00:18:53.094 "trtype": "TCP", 00:18:53.094 "adrfam": "IPv4", 00:18:53.094 "traddr": "10.0.0.2", 00:18:53.094 "trsvcid": "4420" 00:18:53.094 }, 00:18:53.094 "peer_address": { 00:18:53.094 "trtype": "TCP", 00:18:53.094 "adrfam": "IPv4", 00:18:53.094 "traddr": "10.0.0.1", 00:18:53.094 "trsvcid": "54484" 00:18:53.094 }, 00:18:53.094 "auth": { 00:18:53.094 "state": "completed", 00:18:53.094 "digest": "sha384", 00:18:53.094 "dhgroup": "ffdhe6144" 00:18:53.094 } 00:18:53.094 } 00:18:53.094 ]' 00:18:53.094 22:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:53.094 22:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:53.094 22:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:53.094 22:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:53.094 22:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:53.354 22:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.354 22:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.354 22:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.354 22:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZDA0ODc4Y2M1YjVjYzU0NzQwODA3YTg2MDZjYTU1ODMzODNhMzhiYjcxY2Q5NGU52o8hBg==: --dhchap-ctrl-secret DHHC-1:01:NDgyNjRjNzhhY2YyNTQyNzg4ODhmMGMyNDNjNjgwOGQbj5zh: 00:18:54.296 22:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.296 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.296 22:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:54.296 22:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.296 22:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.296 22:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.296 22:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:54.296 22:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
00:18:54.296 22:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:54.296 22:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:18:54.296 22:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:54.296 22:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:54.296 22:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:54.296 22:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:54.296 22:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.296 22:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:54.296 22:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.296 22:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.296 22:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.297 22:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:54.297 22:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:54.557 00:18:54.818 22:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:54.818 22:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.818 22:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:54.818 22:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.818 22:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.818 22:16:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.818 22:16:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.818 22:16:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.818 22:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:54.818 { 00:18:54.818 "cntlid": 87, 00:18:54.818 "qid": 0, 00:18:54.818 "state": "enabled", 00:18:54.818 "thread": "nvmf_tgt_poll_group_000", 00:18:54.818 "listen_address": { 00:18:54.818 "trtype": "TCP", 00:18:54.818 "adrfam": "IPv4", 00:18:54.818 "traddr": "10.0.0.2", 00:18:54.818 "trsvcid": "4420" 00:18:54.818 }, 00:18:54.818 "peer_address": { 00:18:54.818 "trtype": "TCP", 00:18:54.818 "adrfam": "IPv4", 00:18:54.818 "traddr": "10.0.0.1", 00:18:54.818 "trsvcid": "54522" 00:18:54.818 }, 00:18:54.818 "auth": { 00:18:54.818 "state": "completed", 
00:18:54.818 "digest": "sha384", 00:18:54.818 "dhgroup": "ffdhe6144" 00:18:54.818 } 00:18:54.818 } 00:18:54.818 ]' 00:18:54.818 22:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:54.818 22:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:54.818 22:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:55.080 22:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:55.080 22:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:55.080 22:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.080 22:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.080 22:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.080 22:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:OTgyYzhiNDc0NTliZmYyZTc3YWU3ZGYwNzI0ZDIwOTdlMWI1OTRiZjc0YjljZjQyNDMxN2JjZDYwZmY0NGU4YcvVo98=: 00:18:56.023 22:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.023 22:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:56.023 22:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.023 22:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.023 22:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.023 22:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:56.023 22:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:56.023 22:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:56.023 22:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:56.023 22:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:18:56.023 22:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:56.023 22:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:56.023 22:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:56.023 22:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:56.023 22:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.023 22:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:18:56.023 22:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.023 22:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.023 22:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.023 22:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.023 22:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.592 00:18:56.592 22:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:56.592 22:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:56.592 22:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.851 22:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.851 22:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.851 22:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.851 22:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.851 22:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.851 22:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:56.851 { 00:18:56.851 "cntlid": 89, 00:18:56.851 "qid": 0, 00:18:56.851 "state": "enabled", 00:18:56.851 "thread": "nvmf_tgt_poll_group_000", 00:18:56.851 "listen_address": { 00:18:56.851 "trtype": "TCP", 00:18:56.851 "adrfam": "IPv4", 00:18:56.851 "traddr": "10.0.0.2", 00:18:56.852 "trsvcid": "4420" 00:18:56.852 }, 00:18:56.852 "peer_address": { 00:18:56.852 "trtype": "TCP", 00:18:56.852 "adrfam": "IPv4", 00:18:56.852 "traddr": "10.0.0.1", 00:18:56.852 "trsvcid": "54566" 00:18:56.852 }, 00:18:56.852 "auth": { 00:18:56.852 "state": "completed", 00:18:56.852 "digest": "sha384", 00:18:56.852 "dhgroup": "ffdhe8192" 00:18:56.852 } 00:18:56.852 } 00:18:56.852 ]' 00:18:56.852 22:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:56.852 22:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:56.852 22:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:56.852 22:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:56.852 22:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:56.852 22:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.852 22:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.852 22:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.111 22:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MGEwNDJiYTQ5YmE5ZmFlOTdiZWRhODFjYTVlMDI2NDc0MGM3M2ExMzYzOGQ3OTE4BxKSWw==: --dhchap-ctrl-secret DHHC-1:03:Y2IyYWZlNDQ1MDkyMDk4MGYzZjMwZmY4NjNhZDVjZDdmNGZmZDhlZDEyZjUzZWQzYjc3YmY3M2ZhODVkZjQxMrAnVt8=: 00:18:57.679 22:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.679 22:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:57.679 22:16:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.679 22:16:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.938 22:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.938 22:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:57.938 22:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:57.938 22:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:57.938 22:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:18:57.938 22:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:57.938 22:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:57.938 22:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:57.938 22:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:57.938 22:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.938 22:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.938 22:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.938 22:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.938 22:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.938 22:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.938 22:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
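Every pass recorded in this trace follows the same connect_authenticate pattern: reconfigure the host's DH-HMAC-CHAP options, register the host NQN on the target subsystem with the key under test, attach a controller from the host with the matching key, check the resulting qpair, and detach. Below is a minimal sketch of one pass, condensed from the RPC invocations visible above; hostrpc/rpc_cmd are stand-ins for the harness helpers traced at target/auth.sh@31 and autotest_common.sh, and DIGEST/DHGROUP/KEYID are assumed loop placeholders, not variables quoted from the script.

# Illustrative sketch only -- not the verbatim contents of target/auth.sh.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
DIGEST=sha384 DHGROUP=ffdhe8192 KEYID=1                            # one combination from the loop
hostrpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/host.sock "$@"; }   # host-side bdev_nvme instance
rpc_cmd() { "$SPDK/scripts/rpc.py" "$@"; }                         # nvmf target (default socket)

hostrpc bdev_nvme_set_options --dhchap-digests "$DIGEST" --dhchap-dhgroups "$DHGROUP"
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
        --dhchap-key "key$KEYID" --dhchap-ctrlr-key "ckey$KEYID"
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 \
        --dhchap-key "key$KEYID" --dhchap-ctrlr-key "ckey$KEYID"
hostrpc bdev_nvme_get_controllers | jq -r '.[].name'               # expect nvme0
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'
hostrpc bdev_nvme_detach_controller nvme0

As the ckey expansion at target/auth.sh@37 shows, --dhchap-ctrlr-key is only passed for key indices that have a controller key; the key3 passes in this trace omit it.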
00:18:58.537 00:18:58.537 22:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:58.537 22:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.537 22:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:58.796 22:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.796 22:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.796 22:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.796 22:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.796 22:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.796 22:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:58.796 { 00:18:58.796 "cntlid": 91, 00:18:58.796 "qid": 0, 00:18:58.796 "state": "enabled", 00:18:58.796 "thread": "nvmf_tgt_poll_group_000", 00:18:58.796 "listen_address": { 00:18:58.796 "trtype": "TCP", 00:18:58.796 "adrfam": "IPv4", 00:18:58.796 "traddr": "10.0.0.2", 00:18:58.796 "trsvcid": "4420" 00:18:58.796 }, 00:18:58.796 "peer_address": { 00:18:58.796 "trtype": "TCP", 00:18:58.796 "adrfam": "IPv4", 00:18:58.796 "traddr": "10.0.0.1", 00:18:58.796 "trsvcid": "54604" 00:18:58.796 }, 00:18:58.796 "auth": { 00:18:58.796 "state": "completed", 00:18:58.796 "digest": "sha384", 00:18:58.796 "dhgroup": "ffdhe8192" 00:18:58.796 } 00:18:58.796 } 00:18:58.796 ]' 00:18:58.796 22:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:58.796 22:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:58.796 22:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:58.796 22:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:58.796 22:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:58.796 22:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.796 22:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.796 22:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.056 22:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZjUyOGMzZWYyNWE3MWZhYTdlMjE4OWE3MThiYTQ3NjKGD9B7: --dhchap-ctrl-secret DHHC-1:02:ODI3NmRiNGQ5NzllZWIwZDQzMWMyNGFkYjZmY2M3NzhlZDg1MzI2NDY2OGI2ZWExEZsUaA==: 00:18:59.624 22:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.624 22:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:59.624 22:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:59.624 22:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.624 22:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.624 22:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:59.624 22:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:59.624 22:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:59.882 22:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:18:59.882 22:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:59.882 22:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:59.882 22:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:59.882 22:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:59.882 22:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.882 22:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.882 22:16:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.882 22:16:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.882 22:16:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.882 22:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.882 22:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.452 00:19:00.452 22:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:00.452 22:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.452 22:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:00.712 22:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.712 22:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.712 22:16:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.712 22:16:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.712 22:16:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.712 22:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:00.712 { 
00:19:00.712 "cntlid": 93, 00:19:00.712 "qid": 0, 00:19:00.712 "state": "enabled", 00:19:00.712 "thread": "nvmf_tgt_poll_group_000", 00:19:00.712 "listen_address": { 00:19:00.712 "trtype": "TCP", 00:19:00.712 "adrfam": "IPv4", 00:19:00.712 "traddr": "10.0.0.2", 00:19:00.712 "trsvcid": "4420" 00:19:00.712 }, 00:19:00.712 "peer_address": { 00:19:00.712 "trtype": "TCP", 00:19:00.712 "adrfam": "IPv4", 00:19:00.712 "traddr": "10.0.0.1", 00:19:00.712 "trsvcid": "54618" 00:19:00.712 }, 00:19:00.712 "auth": { 00:19:00.712 "state": "completed", 00:19:00.712 "digest": "sha384", 00:19:00.712 "dhgroup": "ffdhe8192" 00:19:00.712 } 00:19:00.712 } 00:19:00.712 ]' 00:19:00.712 22:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:00.712 22:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:00.712 22:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:00.712 22:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:00.712 22:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:00.712 22:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.712 22:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.712 22:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.972 22:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZDA0ODc4Y2M1YjVjYzU0NzQwODA3YTg2MDZjYTU1ODMzODNhMzhiYjcxY2Q5NGU52o8hBg==: --dhchap-ctrl-secret DHHC-1:01:NDgyNjRjNzhhY2YyNTQyNzg4ODhmMGMyNDNjNjgwOGQbj5zh: 00:19:01.543 22:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.543 22:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:01.543 22:16:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.543 22:16:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.543 22:16:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.543 22:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:01.543 22:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:01.543 22:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:01.803 22:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:19:01.803 22:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:01.803 22:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:01.803 22:16:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:01.803 22:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:01.803 22:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.803 22:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:01.803 22:16:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.803 22:16:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.803 22:16:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.803 22:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:01.803 22:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:02.374 00:19:02.374 22:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:02.374 22:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:02.374 22:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.634 22:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.634 22:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.634 22:16:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.634 22:16:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.634 22:16:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.634 22:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:02.634 { 00:19:02.634 "cntlid": 95, 00:19:02.634 "qid": 0, 00:19:02.634 "state": "enabled", 00:19:02.634 "thread": "nvmf_tgt_poll_group_000", 00:19:02.634 "listen_address": { 00:19:02.634 "trtype": "TCP", 00:19:02.634 "adrfam": "IPv4", 00:19:02.634 "traddr": "10.0.0.2", 00:19:02.634 "trsvcid": "4420" 00:19:02.634 }, 00:19:02.634 "peer_address": { 00:19:02.634 "trtype": "TCP", 00:19:02.634 "adrfam": "IPv4", 00:19:02.634 "traddr": "10.0.0.1", 00:19:02.634 "trsvcid": "54640" 00:19:02.634 }, 00:19:02.634 "auth": { 00:19:02.634 "state": "completed", 00:19:02.634 "digest": "sha384", 00:19:02.634 "dhgroup": "ffdhe8192" 00:19:02.634 } 00:19:02.634 } 00:19:02.634 ]' 00:19:02.634 22:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:02.634 22:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:02.634 22:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:02.634 22:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:02.634 22:16:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:02.634 22:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.634 22:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.634 22:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.894 22:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:OTgyYzhiNDc0NTliZmYyZTc3YWU3ZGYwNzI0ZDIwOTdlMWI1OTRiZjc0YjljZjQyNDMxN2JjZDYwZmY0NGU4YcvVo98=: 00:19:03.464 22:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.464 22:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:03.464 22:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.464 22:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.464 22:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.464 22:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:03.464 22:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:03.464 22:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:03.464 22:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:03.464 22:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:03.725 22:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:19:03.725 22:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:03.725 22:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:03.725 22:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:03.725 22:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:03.725 22:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.725 22:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.725 22:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.725 22:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.725 22:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.725 22:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.725 22:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.984 00:19:03.984 22:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:03.984 22:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:03.984 22:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.245 22:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.245 22:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.245 22:16:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.245 22:16:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.245 22:16:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.245 22:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:04.245 { 00:19:04.245 "cntlid": 97, 00:19:04.245 "qid": 0, 00:19:04.245 "state": "enabled", 00:19:04.245 "thread": "nvmf_tgt_poll_group_000", 00:19:04.245 "listen_address": { 00:19:04.245 "trtype": "TCP", 00:19:04.245 "adrfam": "IPv4", 00:19:04.245 "traddr": "10.0.0.2", 00:19:04.245 "trsvcid": "4420" 00:19:04.245 }, 00:19:04.245 "peer_address": { 00:19:04.245 "trtype": "TCP", 00:19:04.245 "adrfam": "IPv4", 00:19:04.245 "traddr": "10.0.0.1", 00:19:04.245 "trsvcid": "38212" 00:19:04.245 }, 00:19:04.245 "auth": { 00:19:04.245 "state": "completed", 00:19:04.245 "digest": "sha512", 00:19:04.245 "dhgroup": "null" 00:19:04.245 } 00:19:04.245 } 00:19:04.245 ]' 00:19:04.245 22:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:04.245 22:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:04.245 22:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:04.245 22:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:04.245 22:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:04.245 22:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.245 22:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.245 22:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.506 22:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MGEwNDJiYTQ5YmE5ZmFlOTdiZWRhODFjYTVlMDI2NDc0MGM3M2ExMzYzOGQ3OTE4BxKSWw==: --dhchap-ctrl-secret 
DHHC-1:03:Y2IyYWZlNDQ1MDkyMDk4MGYzZjMwZmY4NjNhZDVjZDdmNGZmZDhlZDEyZjUzZWQzYjc3YmY3M2ZhODVkZjQxMrAnVt8=: 00:19:05.460 22:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.460 22:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:05.460 22:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.460 22:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.460 22:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.460 22:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:05.460 22:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:05.460 22:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:05.460 22:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:19:05.460 22:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:05.460 22:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:05.460 22:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:05.460 22:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:05.460 22:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.460 22:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.460 22:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.460 22:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.460 22:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.460 22:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.460 22:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.721 00:19:05.721 22:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:05.721 22:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.721 22:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:05.721 22:16:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.721 22:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.721 22:16:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.721 22:16:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.721 22:16:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.721 22:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.721 { 00:19:05.721 "cntlid": 99, 00:19:05.721 "qid": 0, 00:19:05.721 "state": "enabled", 00:19:05.721 "thread": "nvmf_tgt_poll_group_000", 00:19:05.721 "listen_address": { 00:19:05.721 "trtype": "TCP", 00:19:05.721 "adrfam": "IPv4", 00:19:05.721 "traddr": "10.0.0.2", 00:19:05.721 "trsvcid": "4420" 00:19:05.721 }, 00:19:05.721 "peer_address": { 00:19:05.721 "trtype": "TCP", 00:19:05.721 "adrfam": "IPv4", 00:19:05.721 "traddr": "10.0.0.1", 00:19:05.721 "trsvcid": "38238" 00:19:05.721 }, 00:19:05.721 "auth": { 00:19:05.721 "state": "completed", 00:19:05.721 "digest": "sha512", 00:19:05.721 "dhgroup": "null" 00:19:05.721 } 00:19:05.721 } 00:19:05.721 ]' 00:19:05.721 22:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.981 22:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:05.981 22:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.981 22:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:05.981 22:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.981 22:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.981 22:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.981 22:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.241 22:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZjUyOGMzZWYyNWE3MWZhYTdlMjE4OWE3MThiYTQ3NjKGD9B7: --dhchap-ctrl-secret DHHC-1:02:ODI3NmRiNGQ5NzllZWIwZDQzMWMyNGFkYjZmY2M3NzhlZDg1MzI2NDY2OGI2ZWExEZsUaA==: 00:19:06.812 22:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.812 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.812 22:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:06.812 22:16:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.812 22:16:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.812 22:16:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.812 22:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:06.812 22:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:06.812 22:16:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:07.072 22:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:19:07.072 22:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:07.072 22:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:07.072 22:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:07.072 22:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:07.072 22:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.072 22:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.072 22:16:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.072 22:16:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.072 22:16:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.072 22:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.072 22:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.333 00:19:07.333 22:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:07.333 22:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.333 22:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:07.333 22:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.333 22:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.333 22:16:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.333 22:16:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.333 22:16:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.333 22:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:07.333 { 00:19:07.333 "cntlid": 101, 00:19:07.333 "qid": 0, 00:19:07.333 "state": "enabled", 00:19:07.333 "thread": "nvmf_tgt_poll_group_000", 00:19:07.333 "listen_address": { 00:19:07.333 "trtype": "TCP", 00:19:07.333 "adrfam": "IPv4", 00:19:07.333 "traddr": "10.0.0.2", 00:19:07.333 "trsvcid": "4420" 00:19:07.333 }, 00:19:07.333 "peer_address": { 00:19:07.333 "trtype": "TCP", 00:19:07.333 "adrfam": "IPv4", 00:19:07.333 "traddr": "10.0.0.1", 00:19:07.333 "trsvcid": "38264" 00:19:07.333 }, 00:19:07.333 "auth": 
{ 00:19:07.333 "state": "completed", 00:19:07.333 "digest": "sha512", 00:19:07.333 "dhgroup": "null" 00:19:07.333 } 00:19:07.333 } 00:19:07.333 ]' 00:19:07.333 22:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:07.649 22:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:07.649 22:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:07.649 22:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:07.649 22:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.649 22:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.649 22:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.649 22:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.649 22:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZDA0ODc4Y2M1YjVjYzU0NzQwODA3YTg2MDZjYTU1ODMzODNhMzhiYjcxY2Q5NGU52o8hBg==: --dhchap-ctrl-secret DHHC-1:01:NDgyNjRjNzhhY2YyNTQyNzg4ODhmMGMyNDNjNjgwOGQbj5zh: 00:19:08.590 22:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.590 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.590 22:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:08.590 22:16:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.590 22:16:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.590 22:16:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.590 22:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:08.590 22:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:08.590 22:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:08.590 22:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:19:08.590 22:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:08.590 22:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:08.590 22:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:08.590 22:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:08.590 22:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.590 22:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:08.590 22:16:33 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.590 22:16:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.590 22:16:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.591 22:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:08.591 22:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:08.851 00:19:08.851 22:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:08.851 22:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:08.851 22:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.112 22:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.112 22:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.112 22:16:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.112 22:16:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.112 22:16:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.112 22:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:09.112 { 00:19:09.112 "cntlid": 103, 00:19:09.112 "qid": 0, 00:19:09.112 "state": "enabled", 00:19:09.112 "thread": "nvmf_tgt_poll_group_000", 00:19:09.112 "listen_address": { 00:19:09.112 "trtype": "TCP", 00:19:09.112 "adrfam": "IPv4", 00:19:09.112 "traddr": "10.0.0.2", 00:19:09.112 "trsvcid": "4420" 00:19:09.112 }, 00:19:09.112 "peer_address": { 00:19:09.112 "trtype": "TCP", 00:19:09.112 "adrfam": "IPv4", 00:19:09.112 "traddr": "10.0.0.1", 00:19:09.112 "trsvcid": "38288" 00:19:09.112 }, 00:19:09.112 "auth": { 00:19:09.112 "state": "completed", 00:19:09.112 "digest": "sha512", 00:19:09.112 "dhgroup": "null" 00:19:09.112 } 00:19:09.112 } 00:19:09.112 ]' 00:19:09.112 22:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:09.112 22:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:09.112 22:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:09.112 22:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:09.112 22:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:09.373 22:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.373 22:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.373 22:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.373 22:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:OTgyYzhiNDc0NTliZmYyZTc3YWU3ZGYwNzI0ZDIwOTdlMWI1OTRiZjc0YjljZjQyNDMxN2JjZDYwZmY0NGU4YcvVo98=: 00:19:09.945 22:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.945 22:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:09.945 22:16:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.945 22:16:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.945 22:16:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.945 22:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:09.945 22:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:09.945 22:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:09.945 22:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:10.205 22:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:19:10.205 22:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:10.205 22:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:10.205 22:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:10.205 22:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:10.205 22:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.205 22:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.205 22:16:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.205 22:16:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.205 22:16:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.205 22:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.205 22:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.466 00:19:10.466 22:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:10.466 22:16:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.466 22:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:10.727 22:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.727 22:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.727 22:16:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.727 22:16:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.727 22:16:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.727 22:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:10.727 { 00:19:10.727 "cntlid": 105, 00:19:10.727 "qid": 0, 00:19:10.727 "state": "enabled", 00:19:10.727 "thread": "nvmf_tgt_poll_group_000", 00:19:10.727 "listen_address": { 00:19:10.727 "trtype": "TCP", 00:19:10.727 "adrfam": "IPv4", 00:19:10.727 "traddr": "10.0.0.2", 00:19:10.727 "trsvcid": "4420" 00:19:10.727 }, 00:19:10.727 "peer_address": { 00:19:10.727 "trtype": "TCP", 00:19:10.727 "adrfam": "IPv4", 00:19:10.727 "traddr": "10.0.0.1", 00:19:10.727 "trsvcid": "38310" 00:19:10.727 }, 00:19:10.727 "auth": { 00:19:10.727 "state": "completed", 00:19:10.727 "digest": "sha512", 00:19:10.727 "dhgroup": "ffdhe2048" 00:19:10.727 } 00:19:10.727 } 00:19:10.727 ]' 00:19:10.727 22:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:10.727 22:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:10.727 22:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:10.727 22:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:10.727 22:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:10.727 22:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.727 22:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.727 22:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.988 22:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MGEwNDJiYTQ5YmE5ZmFlOTdiZWRhODFjYTVlMDI2NDc0MGM3M2ExMzYzOGQ3OTE4BxKSWw==: --dhchap-ctrl-secret DHHC-1:03:Y2IyYWZlNDQ1MDkyMDk4MGYzZjMwZmY4NjNhZDVjZDdmNGZmZDhlZDEyZjUzZWQzYjc3YmY3M2ZhODVkZjQxMrAnVt8=: 00:19:11.559 22:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.819 22:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:11.819 22:16:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.819 22:16:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
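Between these RPC-level passes the script also exercises the Linux kernel initiator (the nvme connect / nvme disconnect lines at target/auth.sh@52 and @55 above), passing the raw DHHC-1 secrets directly rather than the keyN names used by the RPCs. A condensed sketch of that leg, with the secrets elided, HOSTID/HOSTNQN reusing the uuid shown in the trace, and rpc_cmd as in the previous sketch, is:

# Illustrative sketch; the real DHHC-1:xx:... secrets from the trace are elided, not reproduced.
HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:$HOSTID
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
     -q "$HOSTNQN" --hostid "$HOSTID" \
     --dhchap-secret "DHHC-1:..." --dhchap-ctrl-secret "DHHC-1:..."   # ctrl secret only when a ckey exists
nvme disconnect -n nqn.2024-03.io.spdk:cnode0                         # expect: disconnected 1 controller(s)
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"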
00:19:11.819 22:16:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.819 22:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:11.819 22:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:11.819 22:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:11.819 22:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:19:11.819 22:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:11.819 22:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:11.819 22:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:11.819 22:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:11.819 22:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.819 22:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.819 22:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.819 22:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.819 22:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.819 22:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.819 22:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.084 00:19:12.084 22:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:12.084 22:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:12.084 22:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.345 22:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.345 22:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.345 22:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.345 22:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.345 22:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.345 22:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:12.345 { 00:19:12.345 "cntlid": 107, 00:19:12.345 "qid": 0, 00:19:12.345 "state": "enabled", 00:19:12.345 "thread": 
"nvmf_tgt_poll_group_000", 00:19:12.345 "listen_address": { 00:19:12.345 "trtype": "TCP", 00:19:12.345 "adrfam": "IPv4", 00:19:12.345 "traddr": "10.0.0.2", 00:19:12.345 "trsvcid": "4420" 00:19:12.345 }, 00:19:12.345 "peer_address": { 00:19:12.345 "trtype": "TCP", 00:19:12.345 "adrfam": "IPv4", 00:19:12.345 "traddr": "10.0.0.1", 00:19:12.345 "trsvcid": "38350" 00:19:12.345 }, 00:19:12.345 "auth": { 00:19:12.345 "state": "completed", 00:19:12.345 "digest": "sha512", 00:19:12.345 "dhgroup": "ffdhe2048" 00:19:12.345 } 00:19:12.345 } 00:19:12.345 ]' 00:19:12.345 22:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:12.345 22:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:12.345 22:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:12.345 22:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:12.345 22:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:12.345 22:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.345 22:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.345 22:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.635 22:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZjUyOGMzZWYyNWE3MWZhYTdlMjE4OWE3MThiYTQ3NjKGD9B7: --dhchap-ctrl-secret DHHC-1:02:ODI3NmRiNGQ5NzllZWIwZDQzMWMyNGFkYjZmY2M3NzhlZDg1MzI2NDY2OGI2ZWExEZsUaA==: 00:19:13.231 22:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.231 22:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:13.231 22:16:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.231 22:16:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.231 22:16:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.231 22:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:13.231 22:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:13.231 22:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:13.492 22:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:19:13.492 22:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:13.492 22:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:13.492 22:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:13.492 22:16:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:13.492 22:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.492 22:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.492 22:16:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.492 22:16:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.492 22:16:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.492 22:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.492 22:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.753 00:19:13.753 22:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:13.753 22:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.753 22:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:13.753 22:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.753 22:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.753 22:16:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.753 22:16:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.014 22:16:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.014 22:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:14.014 { 00:19:14.014 "cntlid": 109, 00:19:14.014 "qid": 0, 00:19:14.014 "state": "enabled", 00:19:14.014 "thread": "nvmf_tgt_poll_group_000", 00:19:14.014 "listen_address": { 00:19:14.014 "trtype": "TCP", 00:19:14.014 "adrfam": "IPv4", 00:19:14.014 "traddr": "10.0.0.2", 00:19:14.014 "trsvcid": "4420" 00:19:14.014 }, 00:19:14.014 "peer_address": { 00:19:14.014 "trtype": "TCP", 00:19:14.014 "adrfam": "IPv4", 00:19:14.014 "traddr": "10.0.0.1", 00:19:14.014 "trsvcid": "40184" 00:19:14.014 }, 00:19:14.014 "auth": { 00:19:14.014 "state": "completed", 00:19:14.014 "digest": "sha512", 00:19:14.014 "dhgroup": "ffdhe2048" 00:19:14.014 } 00:19:14.014 } 00:19:14.014 ]' 00:19:14.014 22:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:14.014 22:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:14.014 22:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:14.014 22:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:14.014 22:16:39 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:14.014 22:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.014 22:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.014 22:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.276 22:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZDA0ODc4Y2M1YjVjYzU0NzQwODA3YTg2MDZjYTU1ODMzODNhMzhiYjcxY2Q5NGU52o8hBg==: --dhchap-ctrl-secret DHHC-1:01:NDgyNjRjNzhhY2YyNTQyNzg4ODhmMGMyNDNjNjgwOGQbj5zh: 00:19:14.847 22:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.847 22:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:14.847 22:16:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.847 22:16:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.847 22:16:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.847 22:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:14.847 22:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:14.847 22:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:15.108 22:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:19:15.108 22:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:15.108 22:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:15.108 22:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:15.108 22:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:15.108 22:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.108 22:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:15.108 22:16:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.108 22:16:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.108 22:16:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.108 22:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:15.108 22:16:40 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:15.368 00:19:15.368 22:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:15.368 22:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:15.368 22:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.629 22:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.629 22:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.629 22:16:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.629 22:16:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.629 22:16:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.629 22:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.629 { 00:19:15.629 "cntlid": 111, 00:19:15.629 "qid": 0, 00:19:15.629 "state": "enabled", 00:19:15.629 "thread": "nvmf_tgt_poll_group_000", 00:19:15.629 "listen_address": { 00:19:15.629 "trtype": "TCP", 00:19:15.629 "adrfam": "IPv4", 00:19:15.629 "traddr": "10.0.0.2", 00:19:15.629 "trsvcid": "4420" 00:19:15.629 }, 00:19:15.629 "peer_address": { 00:19:15.629 "trtype": "TCP", 00:19:15.629 "adrfam": "IPv4", 00:19:15.629 "traddr": "10.0.0.1", 00:19:15.629 "trsvcid": "40212" 00:19:15.629 }, 00:19:15.629 "auth": { 00:19:15.629 "state": "completed", 00:19:15.629 "digest": "sha512", 00:19:15.629 "dhgroup": "ffdhe2048" 00:19:15.629 } 00:19:15.629 } 00:19:15.629 ]' 00:19:15.629 22:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.629 22:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:15.629 22:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:15.629 22:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:15.629 22:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:15.629 22:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.629 22:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.629 22:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.890 22:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:OTgyYzhiNDc0NTliZmYyZTc3YWU3ZGYwNzI0ZDIwOTdlMWI1OTRiZjc0YjljZjQyNDMxN2JjZDYwZmY0NGU4YcvVo98=: 00:19:16.462 22:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.462 22:16:41 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:16.462 22:16:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.462 22:16:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.462 22:16:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.462 22:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:16.462 22:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:16.462 22:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:16.462 22:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:16.725 22:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:19:16.725 22:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:16.725 22:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:16.725 22:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:16.725 22:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:16.725 22:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.725 22:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.725 22:16:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.725 22:16:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.725 22:16:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.725 22:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.725 22:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.985 00:19:16.985 22:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:16.985 22:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:16.985 22:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.247 22:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.247 22:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.247 22:16:42 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.247 22:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.247 22:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.247 22:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:17.247 { 00:19:17.247 "cntlid": 113, 00:19:17.247 "qid": 0, 00:19:17.247 "state": "enabled", 00:19:17.247 "thread": "nvmf_tgt_poll_group_000", 00:19:17.247 "listen_address": { 00:19:17.247 "trtype": "TCP", 00:19:17.247 "adrfam": "IPv4", 00:19:17.247 "traddr": "10.0.0.2", 00:19:17.247 "trsvcid": "4420" 00:19:17.247 }, 00:19:17.247 "peer_address": { 00:19:17.247 "trtype": "TCP", 00:19:17.247 "adrfam": "IPv4", 00:19:17.247 "traddr": "10.0.0.1", 00:19:17.247 "trsvcid": "40246" 00:19:17.247 }, 00:19:17.247 "auth": { 00:19:17.247 "state": "completed", 00:19:17.247 "digest": "sha512", 00:19:17.247 "dhgroup": "ffdhe3072" 00:19:17.247 } 00:19:17.247 } 00:19:17.247 ]' 00:19:17.247 22:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:17.247 22:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:17.247 22:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:17.247 22:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:17.247 22:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:17.247 22:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.247 22:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.247 22:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.508 22:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MGEwNDJiYTQ5YmE5ZmFlOTdiZWRhODFjYTVlMDI2NDc0MGM3M2ExMzYzOGQ3OTE4BxKSWw==: --dhchap-ctrl-secret DHHC-1:03:Y2IyYWZlNDQ1MDkyMDk4MGYzZjMwZmY4NjNhZDVjZDdmNGZmZDhlZDEyZjUzZWQzYjc3YmY3M2ZhODVkZjQxMrAnVt8=: 00:19:18.080 22:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.080 22:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:18.080 22:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.080 22:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.340 22:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.340 22:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:18.340 22:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:18.340 22:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:18.340 22:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:19:18.340 22:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:18.340 22:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:18.340 22:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:18.340 22:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:18.340 22:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.340 22:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.340 22:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.340 22:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.340 22:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.340 22:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.340 22:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.601 00:19:18.601 22:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.601 22:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.601 22:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.862 22:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.862 22:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.862 22:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.862 22:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.862 22:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.862 22:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:18.862 { 00:19:18.862 "cntlid": 115, 00:19:18.862 "qid": 0, 00:19:18.862 "state": "enabled", 00:19:18.862 "thread": "nvmf_tgt_poll_group_000", 00:19:18.862 "listen_address": { 00:19:18.862 "trtype": "TCP", 00:19:18.862 "adrfam": "IPv4", 00:19:18.862 "traddr": "10.0.0.2", 00:19:18.862 "trsvcid": "4420" 00:19:18.862 }, 00:19:18.862 "peer_address": { 00:19:18.862 "trtype": "TCP", 00:19:18.862 "adrfam": "IPv4", 00:19:18.862 "traddr": "10.0.0.1", 00:19:18.862 "trsvcid": "40276" 00:19:18.862 }, 00:19:18.862 "auth": { 00:19:18.862 "state": "completed", 00:19:18.862 "digest": "sha512", 00:19:18.862 "dhgroup": "ffdhe3072" 00:19:18.862 } 00:19:18.862 } 
00:19:18.862 ]' 00:19:18.862 22:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:18.862 22:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:18.862 22:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:18.862 22:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:18.862 22:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:18.862 22:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.862 22:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.862 22:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.123 22:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZjUyOGMzZWYyNWE3MWZhYTdlMjE4OWE3MThiYTQ3NjKGD9B7: --dhchap-ctrl-secret DHHC-1:02:ODI3NmRiNGQ5NzllZWIwZDQzMWMyNGFkYjZmY2M3NzhlZDg1MzI2NDY2OGI2ZWExEZsUaA==: 00:19:20.063 22:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.063 22:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:20.063 22:16:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.063 22:16:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.063 22:16:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.063 22:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:20.063 22:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:20.063 22:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:20.063 22:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:19:20.063 22:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:20.063 22:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:20.063 22:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:20.063 22:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:20.063 22:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.063 22:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.063 22:16:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.063 22:16:45 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.063 22:16:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.063 22:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.063 22:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.323 00:19:20.323 22:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.323 22:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.323 22:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.323 22:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.323 22:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.323 22:16:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.323 22:16:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.323 22:16:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.323 22:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.323 { 00:19:20.323 "cntlid": 117, 00:19:20.323 "qid": 0, 00:19:20.323 "state": "enabled", 00:19:20.323 "thread": "nvmf_tgt_poll_group_000", 00:19:20.323 "listen_address": { 00:19:20.323 "trtype": "TCP", 00:19:20.323 "adrfam": "IPv4", 00:19:20.323 "traddr": "10.0.0.2", 00:19:20.323 "trsvcid": "4420" 00:19:20.323 }, 00:19:20.323 "peer_address": { 00:19:20.323 "trtype": "TCP", 00:19:20.323 "adrfam": "IPv4", 00:19:20.323 "traddr": "10.0.0.1", 00:19:20.323 "trsvcid": "40288" 00:19:20.323 }, 00:19:20.323 "auth": { 00:19:20.323 "state": "completed", 00:19:20.323 "digest": "sha512", 00:19:20.323 "dhgroup": "ffdhe3072" 00:19:20.323 } 00:19:20.323 } 00:19:20.323 ]' 00:19:20.323 22:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.583 22:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:20.583 22:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.583 22:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:20.583 22:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.583 22:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.583 22:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.583 22:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.843 22:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZDA0ODc4Y2M1YjVjYzU0NzQwODA3YTg2MDZjYTU1ODMzODNhMzhiYjcxY2Q5NGU52o8hBg==: --dhchap-ctrl-secret DHHC-1:01:NDgyNjRjNzhhY2YyNTQyNzg4ODhmMGMyNDNjNjgwOGQbj5zh: 00:19:21.415 22:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.415 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.415 22:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:21.415 22:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.415 22:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.415 22:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.415 22:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:21.415 22:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:21.415 22:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:21.676 22:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:19:21.676 22:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:21.676 22:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:21.676 22:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:21.676 22:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:21.676 22:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.676 22:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:21.676 22:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.676 22:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.676 22:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.676 22:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:21.676 22:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:21.937 00:19:21.937 22:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:21.937 22:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.937 22:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:22.207 22:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.208 22:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.208 22:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.208 22:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.208 22:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.208 22:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:22.208 { 00:19:22.208 "cntlid": 119, 00:19:22.208 "qid": 0, 00:19:22.208 "state": "enabled", 00:19:22.208 "thread": "nvmf_tgt_poll_group_000", 00:19:22.208 "listen_address": { 00:19:22.208 "trtype": "TCP", 00:19:22.208 "adrfam": "IPv4", 00:19:22.208 "traddr": "10.0.0.2", 00:19:22.208 "trsvcid": "4420" 00:19:22.208 }, 00:19:22.208 "peer_address": { 00:19:22.208 "trtype": "TCP", 00:19:22.208 "adrfam": "IPv4", 00:19:22.208 "traddr": "10.0.0.1", 00:19:22.208 "trsvcid": "40328" 00:19:22.208 }, 00:19:22.208 "auth": { 00:19:22.208 "state": "completed", 00:19:22.208 "digest": "sha512", 00:19:22.208 "dhgroup": "ffdhe3072" 00:19:22.208 } 00:19:22.208 } 00:19:22.208 ]' 00:19:22.208 22:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:22.208 22:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:22.208 22:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:22.208 22:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:22.208 22:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:22.208 22:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.208 22:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.208 22:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.471 22:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:OTgyYzhiNDc0NTliZmYyZTc3YWU3ZGYwNzI0ZDIwOTdlMWI1OTRiZjc0YjljZjQyNDMxN2JjZDYwZmY0NGU4YcvVo98=: 00:19:23.041 22:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.041 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.041 22:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:23.041 22:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.041 22:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.041 22:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.041 22:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:23.041 22:16:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:23.041 22:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:23.041 22:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:23.303 22:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:19:23.303 22:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:23.303 22:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:23.303 22:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:23.303 22:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:23.303 22:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.303 22:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.303 22:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.303 22:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.303 22:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.303 22:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.303 22:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.564 00:19:23.564 22:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:23.564 22:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:23.564 22:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.824 22:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.824 22:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.824 22:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.824 22:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.824 22:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.824 22:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:23.824 { 00:19:23.824 "cntlid": 121, 00:19:23.824 "qid": 0, 00:19:23.824 "state": "enabled", 00:19:23.824 "thread": "nvmf_tgt_poll_group_000", 00:19:23.824 "listen_address": { 00:19:23.824 "trtype": "TCP", 00:19:23.824 "adrfam": "IPv4", 
00:19:23.824 "traddr": "10.0.0.2", 00:19:23.824 "trsvcid": "4420" 00:19:23.824 }, 00:19:23.824 "peer_address": { 00:19:23.824 "trtype": "TCP", 00:19:23.824 "adrfam": "IPv4", 00:19:23.824 "traddr": "10.0.0.1", 00:19:23.824 "trsvcid": "44660" 00:19:23.824 }, 00:19:23.824 "auth": { 00:19:23.824 "state": "completed", 00:19:23.824 "digest": "sha512", 00:19:23.824 "dhgroup": "ffdhe4096" 00:19:23.824 } 00:19:23.824 } 00:19:23.824 ]' 00:19:23.824 22:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:23.824 22:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:23.824 22:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:23.824 22:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:23.824 22:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:23.824 22:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.824 22:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.824 22:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.084 22:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MGEwNDJiYTQ5YmE5ZmFlOTdiZWRhODFjYTVlMDI2NDc0MGM3M2ExMzYzOGQ3OTE4BxKSWw==: --dhchap-ctrl-secret DHHC-1:03:Y2IyYWZlNDQ1MDkyMDk4MGYzZjMwZmY4NjNhZDVjZDdmNGZmZDhlZDEyZjUzZWQzYjc3YmY3M2ZhODVkZjQxMrAnVt8=: 00:19:25.021 22:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.021 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.021 22:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:25.021 22:16:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.021 22:16:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.021 22:16:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.021 22:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:25.021 22:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:25.021 22:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:25.021 22:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:19:25.021 22:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:25.021 22:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:25.021 22:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:25.021 22:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:25.021 22:16:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.021 22:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.021 22:16:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.021 22:16:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.021 22:16:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.021 22:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.021 22:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.303 00:19:25.303 22:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:25.303 22:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:25.303 22:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.563 22:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.563 22:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.563 22:16:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.563 22:16:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.563 22:16:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.563 22:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:25.563 { 00:19:25.563 "cntlid": 123, 00:19:25.563 "qid": 0, 00:19:25.563 "state": "enabled", 00:19:25.563 "thread": "nvmf_tgt_poll_group_000", 00:19:25.563 "listen_address": { 00:19:25.563 "trtype": "TCP", 00:19:25.563 "adrfam": "IPv4", 00:19:25.563 "traddr": "10.0.0.2", 00:19:25.563 "trsvcid": "4420" 00:19:25.563 }, 00:19:25.563 "peer_address": { 00:19:25.563 "trtype": "TCP", 00:19:25.563 "adrfam": "IPv4", 00:19:25.563 "traddr": "10.0.0.1", 00:19:25.563 "trsvcid": "44684" 00:19:25.563 }, 00:19:25.563 "auth": { 00:19:25.563 "state": "completed", 00:19:25.563 "digest": "sha512", 00:19:25.563 "dhgroup": "ffdhe4096" 00:19:25.563 } 00:19:25.563 } 00:19:25.563 ]' 00:19:25.563 22:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:25.563 22:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:25.563 22:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:25.563 22:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:25.563 22:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:25.563 22:16:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.563 22:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.563 22:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.823 22:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZjUyOGMzZWYyNWE3MWZhYTdlMjE4OWE3MThiYTQ3NjKGD9B7: --dhchap-ctrl-secret DHHC-1:02:ODI3NmRiNGQ5NzllZWIwZDQzMWMyNGFkYjZmY2M3NzhlZDg1MzI2NDY2OGI2ZWExEZsUaA==: 00:19:26.392 22:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.392 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.392 22:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:26.392 22:16:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.392 22:16:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.392 22:16:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.392 22:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:26.392 22:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:26.392 22:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:26.652 22:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:19:26.652 22:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.652 22:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:26.652 22:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:26.652 22:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:26.652 22:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.652 22:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.652 22:16:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.652 22:16:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.652 22:16:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.652 22:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.652 22:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.920 00:19:26.920 22:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.920 22:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.920 22:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:27.182 22:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.182 22:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.182 22:16:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.182 22:16:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.182 22:16:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.182 22:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:27.182 { 00:19:27.182 "cntlid": 125, 00:19:27.182 "qid": 0, 00:19:27.182 "state": "enabled", 00:19:27.182 "thread": "nvmf_tgt_poll_group_000", 00:19:27.182 "listen_address": { 00:19:27.182 "trtype": "TCP", 00:19:27.182 "adrfam": "IPv4", 00:19:27.182 "traddr": "10.0.0.2", 00:19:27.182 "trsvcid": "4420" 00:19:27.182 }, 00:19:27.182 "peer_address": { 00:19:27.182 "trtype": "TCP", 00:19:27.182 "adrfam": "IPv4", 00:19:27.182 "traddr": "10.0.0.1", 00:19:27.182 "trsvcid": "44726" 00:19:27.182 }, 00:19:27.182 "auth": { 00:19:27.182 "state": "completed", 00:19:27.182 "digest": "sha512", 00:19:27.182 "dhgroup": "ffdhe4096" 00:19:27.182 } 00:19:27.182 } 00:19:27.182 ]' 00:19:27.182 22:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:27.182 22:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:27.182 22:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:27.182 22:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:27.182 22:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:27.182 22:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.182 22:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.182 22:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.518 22:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZDA0ODc4Y2M1YjVjYzU0NzQwODA3YTg2MDZjYTU1ODMzODNhMzhiYjcxY2Q5NGU52o8hBg==: --dhchap-ctrl-secret DHHC-1:01:NDgyNjRjNzhhY2YyNTQyNzg4ODhmMGMyNDNjNjgwOGQbj5zh: 00:19:28.101 22:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
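The records above repeat one DH-HMAC-CHAP cycle per key index: restrict the host-side digests and DH groups, register the host NQN on the subsystem with a key pair, attach a controller over TCP with the same keys, confirm via the target RPC that the admin qpair finished authentication with the expected digest and DH group, detach, then run the same handshake once more through nvme-cli before removing the host. Below is a minimal sketch of that cycle reconstructed from the xtrace output of this run; the rpc.py path, NQNs, and addresses are the ones shown above, the DHHC-1 secret strings are placeholders rather than the keys used here, and the target-side calls are shown against rpc.py's default socket where the harness uses its rpc_cmd wrapper.

#!/usr/bin/env bash
set -e

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

# Host side (the SPDK app behind /var/tmp/host.sock): offer only
# sha512 / ffdhe2048 during DH-HMAC-CHAP negotiation.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

# Target side: allow this host on the subsystem with the named key pair.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Attach a controller over TCP, authenticating with the same keys.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
"$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # nvme0

# Check the admin qpair on the target: authentication completed with the
# expected digest and DH group.
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'    # completed
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.digest'   # sha512
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.dhgroup'  # ffdhe2048

# Tear down the SPDK host path and repeat the handshake with nvme-cli,
# passing the in-band secrets directly (placeholder values, not this run's keys).
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --dhchap-secret 'DHHC-1:01:<host key>:' --dhchap-ctrl-secret 'DHHC-1:02:<ctrl key>:'
nvme disconnect -n "$subnqn"
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

For key3 no controller key is configured, so the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion in auth.sh drops the controller-key arguments and the exchange authenticates the host only; the ffdhe3072, ffdhe4096, and ffdhe6144 passes in this section are the same cycle with a different --dhchap-dhgroups value.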
00:19:28.101 22:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:28.101 22:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.101 22:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.101 22:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.101 22:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.101 22:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:28.101 22:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:28.363 22:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:19:28.363 22:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.363 22:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:28.363 22:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:28.363 22:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:28.363 22:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.363 22:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:28.363 22:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.363 22:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.363 22:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.363 22:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:28.363 22:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:28.622 00:19:28.622 22:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:28.622 22:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.622 22:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:28.882 22:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.882 22:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.882 22:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.882 22:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:19:28.882 22:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.882 22:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:28.882 { 00:19:28.882 "cntlid": 127, 00:19:28.882 "qid": 0, 00:19:28.882 "state": "enabled", 00:19:28.882 "thread": "nvmf_tgt_poll_group_000", 00:19:28.882 "listen_address": { 00:19:28.882 "trtype": "TCP", 00:19:28.882 "adrfam": "IPv4", 00:19:28.882 "traddr": "10.0.0.2", 00:19:28.882 "trsvcid": "4420" 00:19:28.882 }, 00:19:28.882 "peer_address": { 00:19:28.882 "trtype": "TCP", 00:19:28.882 "adrfam": "IPv4", 00:19:28.882 "traddr": "10.0.0.1", 00:19:28.882 "trsvcid": "44746" 00:19:28.882 }, 00:19:28.882 "auth": { 00:19:28.882 "state": "completed", 00:19:28.882 "digest": "sha512", 00:19:28.882 "dhgroup": "ffdhe4096" 00:19:28.882 } 00:19:28.882 } 00:19:28.882 ]' 00:19:28.882 22:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:28.882 22:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:28.882 22:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:28.882 22:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:28.882 22:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:28.882 22:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.882 22:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.882 22:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.141 22:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:OTgyYzhiNDc0NTliZmYyZTc3YWU3ZGYwNzI0ZDIwOTdlMWI1OTRiZjc0YjljZjQyNDMxN2JjZDYwZmY0NGU4YcvVo98=: 00:19:29.709 22:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.709 22:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:29.709 22:16:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.709 22:16:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.969 22:16:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.969 22:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:29.969 22:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:29.969 22:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:29.969 22:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:29.969 22:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:19:29.969 22:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:29.969 22:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:29.969 22:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:29.969 22:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:29.969 22:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.969 22:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.969 22:16:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.969 22:16:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.969 22:16:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.969 22:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.969 22:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.540 00:19:30.540 22:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:30.540 22:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:30.540 22:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.540 22:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.540 22:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.540 22:16:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.540 22:16:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.540 22:16:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.540 22:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:30.540 { 00:19:30.540 "cntlid": 129, 00:19:30.540 "qid": 0, 00:19:30.540 "state": "enabled", 00:19:30.540 "thread": "nvmf_tgt_poll_group_000", 00:19:30.540 "listen_address": { 00:19:30.540 "trtype": "TCP", 00:19:30.540 "adrfam": "IPv4", 00:19:30.540 "traddr": "10.0.0.2", 00:19:30.540 "trsvcid": "4420" 00:19:30.540 }, 00:19:30.540 "peer_address": { 00:19:30.540 "trtype": "TCP", 00:19:30.540 "adrfam": "IPv4", 00:19:30.540 "traddr": "10.0.0.1", 00:19:30.540 "trsvcid": "44776" 00:19:30.540 }, 00:19:30.540 "auth": { 00:19:30.540 "state": "completed", 00:19:30.540 "digest": "sha512", 00:19:30.540 "dhgroup": "ffdhe6144" 00:19:30.540 } 00:19:30.540 } 00:19:30.540 ]' 00:19:30.540 22:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:30.540 22:16:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:30.540 22:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:30.540 22:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:30.540 22:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:30.800 22:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.800 22:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.800 22:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.800 22:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MGEwNDJiYTQ5YmE5ZmFlOTdiZWRhODFjYTVlMDI2NDc0MGM3M2ExMzYzOGQ3OTE4BxKSWw==: --dhchap-ctrl-secret DHHC-1:03:Y2IyYWZlNDQ1MDkyMDk4MGYzZjMwZmY4NjNhZDVjZDdmNGZmZDhlZDEyZjUzZWQzYjc3YmY3M2ZhODVkZjQxMrAnVt8=: 00:19:31.740 22:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.740 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.740 22:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:31.740 22:16:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.740 22:16:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.740 22:16:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.740 22:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:31.740 22:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:31.740 22:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:31.740 22:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:19:31.740 22:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.740 22:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:31.740 22:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:31.740 22:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:31.740 22:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.740 22:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.740 22:16:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.740 22:16:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.740 22:16:56 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.740 22:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.740 22:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.310 00:19:32.310 22:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:32.310 22:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:32.310 22:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.310 22:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.310 22:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.310 22:16:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.310 22:16:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.310 22:16:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.310 22:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:32.310 { 00:19:32.310 "cntlid": 131, 00:19:32.310 "qid": 0, 00:19:32.310 "state": "enabled", 00:19:32.310 "thread": "nvmf_tgt_poll_group_000", 00:19:32.310 "listen_address": { 00:19:32.310 "trtype": "TCP", 00:19:32.310 "adrfam": "IPv4", 00:19:32.310 "traddr": "10.0.0.2", 00:19:32.310 "trsvcid": "4420" 00:19:32.310 }, 00:19:32.310 "peer_address": { 00:19:32.310 "trtype": "TCP", 00:19:32.310 "adrfam": "IPv4", 00:19:32.310 "traddr": "10.0.0.1", 00:19:32.310 "trsvcid": "44810" 00:19:32.310 }, 00:19:32.310 "auth": { 00:19:32.310 "state": "completed", 00:19:32.310 "digest": "sha512", 00:19:32.310 "dhgroup": "ffdhe6144" 00:19:32.310 } 00:19:32.310 } 00:19:32.310 ]' 00:19:32.310 22:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:32.310 22:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:32.310 22:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:32.310 22:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:32.310 22:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:32.570 22:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.570 22:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.570 22:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.570 22:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZjUyOGMzZWYyNWE3MWZhYTdlMjE4OWE3MThiYTQ3NjKGD9B7: --dhchap-ctrl-secret DHHC-1:02:ODI3NmRiNGQ5NzllZWIwZDQzMWMyNGFkYjZmY2M3NzhlZDg1MzI2NDY2OGI2ZWExEZsUaA==: 00:19:33.511 22:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.511 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.511 22:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:33.511 22:16:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.511 22:16:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.511 22:16:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.511 22:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:33.511 22:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:33.511 22:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:33.511 22:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:19:33.511 22:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:33.511 22:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:33.511 22:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:33.511 22:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:33.511 22:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.511 22:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.511 22:16:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.511 22:16:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.511 22:16:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.511 22:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.511 22:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.082 00:19:34.082 22:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:34.082 22:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:34.082 22:16:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.082 22:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.082 22:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.082 22:16:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.082 22:16:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.082 22:16:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.082 22:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:34.082 { 00:19:34.082 "cntlid": 133, 00:19:34.082 "qid": 0, 00:19:34.082 "state": "enabled", 00:19:34.082 "thread": "nvmf_tgt_poll_group_000", 00:19:34.082 "listen_address": { 00:19:34.082 "trtype": "TCP", 00:19:34.082 "adrfam": "IPv4", 00:19:34.082 "traddr": "10.0.0.2", 00:19:34.082 "trsvcid": "4420" 00:19:34.082 }, 00:19:34.082 "peer_address": { 00:19:34.082 "trtype": "TCP", 00:19:34.082 "adrfam": "IPv4", 00:19:34.082 "traddr": "10.0.0.1", 00:19:34.082 "trsvcid": "44008" 00:19:34.082 }, 00:19:34.082 "auth": { 00:19:34.082 "state": "completed", 00:19:34.082 "digest": "sha512", 00:19:34.082 "dhgroup": "ffdhe6144" 00:19:34.082 } 00:19:34.082 } 00:19:34.082 ]' 00:19:34.082 22:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:34.082 22:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:34.082 22:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:34.341 22:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:34.341 22:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:34.341 22:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.341 22:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.341 22:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.341 22:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZDA0ODc4Y2M1YjVjYzU0NzQwODA3YTg2MDZjYTU1ODMzODNhMzhiYjcxY2Q5NGU52o8hBg==: --dhchap-ctrl-secret DHHC-1:01:NDgyNjRjNzhhY2YyNTQyNzg4ODhmMGMyNDNjNjgwOGQbj5zh: 00:19:35.277 22:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.278 22:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:35.278 22:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.278 22:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.278 22:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.278 22:17:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:35.278 22:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:35.278 22:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:35.536 22:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:19:35.536 22:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.536 22:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:35.536 22:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:35.536 22:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:35.536 22:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.536 22:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:35.536 22:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.536 22:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.536 22:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.536 22:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:35.536 22:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:35.795 00:19:35.795 22:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:35.795 22:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:35.795 22:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.054 22:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.054 22:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.054 22:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.054 22:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.054 22:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.054 22:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.054 { 00:19:36.054 "cntlid": 135, 00:19:36.054 "qid": 0, 00:19:36.054 "state": "enabled", 00:19:36.054 "thread": "nvmf_tgt_poll_group_000", 00:19:36.054 "listen_address": { 00:19:36.054 "trtype": "TCP", 00:19:36.054 "adrfam": "IPv4", 00:19:36.054 "traddr": "10.0.0.2", 00:19:36.054 "trsvcid": "4420" 00:19:36.054 }, 
00:19:36.054 "peer_address": { 00:19:36.054 "trtype": "TCP", 00:19:36.054 "adrfam": "IPv4", 00:19:36.054 "traddr": "10.0.0.1", 00:19:36.054 "trsvcid": "44040" 00:19:36.054 }, 00:19:36.054 "auth": { 00:19:36.054 "state": "completed", 00:19:36.054 "digest": "sha512", 00:19:36.054 "dhgroup": "ffdhe6144" 00:19:36.054 } 00:19:36.054 } 00:19:36.054 ]' 00:19:36.054 22:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.054 22:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:36.054 22:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.054 22:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:36.054 22:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.054 22:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.054 22:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.054 22:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.314 22:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:OTgyYzhiNDc0NTliZmYyZTc3YWU3ZGYwNzI0ZDIwOTdlMWI1OTRiZjc0YjljZjQyNDMxN2JjZDYwZmY0NGU4YcvVo98=: 00:19:36.884 22:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.145 22:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:37.145 22:17:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.145 22:17:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.145 22:17:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.145 22:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:37.145 22:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.145 22:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:37.145 22:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:37.145 22:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:19:37.145 22:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:37.145 22:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:37.145 22:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:37.145 22:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:37.145 22:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:19:37.145 22:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.145 22:17:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.145 22:17:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.145 22:17:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.145 22:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.145 22:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.714 00:19:37.714 22:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:37.714 22:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:37.714 22:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.973 22:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.973 22:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.973 22:17:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.973 22:17:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.973 22:17:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.973 22:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:37.973 { 00:19:37.973 "cntlid": 137, 00:19:37.973 "qid": 0, 00:19:37.973 "state": "enabled", 00:19:37.973 "thread": "nvmf_tgt_poll_group_000", 00:19:37.973 "listen_address": { 00:19:37.973 "trtype": "TCP", 00:19:37.973 "adrfam": "IPv4", 00:19:37.973 "traddr": "10.0.0.2", 00:19:37.973 "trsvcid": "4420" 00:19:37.973 }, 00:19:37.973 "peer_address": { 00:19:37.973 "trtype": "TCP", 00:19:37.973 "adrfam": "IPv4", 00:19:37.973 "traddr": "10.0.0.1", 00:19:37.973 "trsvcid": "44068" 00:19:37.973 }, 00:19:37.973 "auth": { 00:19:37.973 "state": "completed", 00:19:37.973 "digest": "sha512", 00:19:37.973 "dhgroup": "ffdhe8192" 00:19:37.973 } 00:19:37.973 } 00:19:37.973 ]' 00:19:37.973 22:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:37.973 22:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:37.973 22:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:37.973 22:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:37.973 22:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:37.973 22:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.973 22:17:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.973 22:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.233 22:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MGEwNDJiYTQ5YmE5ZmFlOTdiZWRhODFjYTVlMDI2NDc0MGM3M2ExMzYzOGQ3OTE4BxKSWw==: --dhchap-ctrl-secret DHHC-1:03:Y2IyYWZlNDQ1MDkyMDk4MGYzZjMwZmY4NjNhZDVjZDdmNGZmZDhlZDEyZjUzZWQzYjc3YmY3M2ZhODVkZjQxMrAnVt8=: 00:19:38.804 22:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.804 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.804 22:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:38.804 22:17:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.804 22:17:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.804 22:17:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.804 22:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:38.804 22:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:38.804 22:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:39.066 22:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:19:39.066 22:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.066 22:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:39.066 22:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:39.066 22:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:39.066 22:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.066 22:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.066 22:17:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.066 22:17:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.066 22:17:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.066 22:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.066 22:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.635 00:19:39.635 22:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:39.635 22:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:39.635 22:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.894 22:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.894 22:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.894 22:17:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.894 22:17:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.894 22:17:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.894 22:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:39.894 { 00:19:39.894 "cntlid": 139, 00:19:39.894 "qid": 0, 00:19:39.894 "state": "enabled", 00:19:39.894 "thread": "nvmf_tgt_poll_group_000", 00:19:39.894 "listen_address": { 00:19:39.894 "trtype": "TCP", 00:19:39.894 "adrfam": "IPv4", 00:19:39.894 "traddr": "10.0.0.2", 00:19:39.894 "trsvcid": "4420" 00:19:39.894 }, 00:19:39.894 "peer_address": { 00:19:39.894 "trtype": "TCP", 00:19:39.894 "adrfam": "IPv4", 00:19:39.894 "traddr": "10.0.0.1", 00:19:39.894 "trsvcid": "44098" 00:19:39.894 }, 00:19:39.894 "auth": { 00:19:39.894 "state": "completed", 00:19:39.894 "digest": "sha512", 00:19:39.894 "dhgroup": "ffdhe8192" 00:19:39.894 } 00:19:39.894 } 00:19:39.894 ]' 00:19:39.894 22:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:39.894 22:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:39.894 22:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:39.894 22:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:39.894 22:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:39.894 22:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.894 22:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.894 22:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.153 22:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZjUyOGMzZWYyNWE3MWZhYTdlMjE4OWE3MThiYTQ3NjKGD9B7: --dhchap-ctrl-secret DHHC-1:02:ODI3NmRiNGQ5NzllZWIwZDQzMWMyNGFkYjZmY2M3NzhlZDg1MzI2NDY2OGI2ZWExEZsUaA==: 00:19:40.724 22:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.724 22:17:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:40.724 22:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.724 22:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.724 22:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.724 22:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:40.724 22:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:40.724 22:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:40.985 22:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:19:40.985 22:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:40.985 22:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:40.985 22:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:40.985 22:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:40.985 22:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.985 22:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.985 22:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.985 22:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.985 22:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.985 22:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.985 22:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.557 00:19:41.557 22:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.557 22:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.557 22:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.817 22:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.817 22:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.817 22:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.817 22:17:06 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:41.817 22:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.817 22:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:41.817 { 00:19:41.817 "cntlid": 141, 00:19:41.817 "qid": 0, 00:19:41.817 "state": "enabled", 00:19:41.817 "thread": "nvmf_tgt_poll_group_000", 00:19:41.817 "listen_address": { 00:19:41.817 "trtype": "TCP", 00:19:41.817 "adrfam": "IPv4", 00:19:41.817 "traddr": "10.0.0.2", 00:19:41.817 "trsvcid": "4420" 00:19:41.817 }, 00:19:41.817 "peer_address": { 00:19:41.817 "trtype": "TCP", 00:19:41.817 "adrfam": "IPv4", 00:19:41.817 "traddr": "10.0.0.1", 00:19:41.817 "trsvcid": "44120" 00:19:41.818 }, 00:19:41.818 "auth": { 00:19:41.818 "state": "completed", 00:19:41.818 "digest": "sha512", 00:19:41.818 "dhgroup": "ffdhe8192" 00:19:41.818 } 00:19:41.818 } 00:19:41.818 ]' 00:19:41.818 22:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:41.818 22:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:41.818 22:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:41.818 22:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:41.818 22:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:41.818 22:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.818 22:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.818 22:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.078 22:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZDA0ODc4Y2M1YjVjYzU0NzQwODA3YTg2MDZjYTU1ODMzODNhMzhiYjcxY2Q5NGU52o8hBg==: --dhchap-ctrl-secret DHHC-1:01:NDgyNjRjNzhhY2YyNTQyNzg4ODhmMGMyNDNjNjgwOGQbj5zh: 00:19:42.708 22:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.708 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.708 22:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:42.709 22:17:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.709 22:17:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.709 22:17:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.709 22:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:42.709 22:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:42.709 22:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:42.969 22:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:19:42.969 22:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.969 22:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:42.969 22:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:42.969 22:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:42.969 22:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.969 22:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:42.969 22:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.969 22:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.969 22:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.969 22:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:42.969 22:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:43.540 00:19:43.540 22:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:43.540 22:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:43.540 22:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.540 22:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.540 22:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.540 22:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.540 22:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.540 22:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.540 22:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:43.540 { 00:19:43.540 "cntlid": 143, 00:19:43.540 "qid": 0, 00:19:43.540 "state": "enabled", 00:19:43.540 "thread": "nvmf_tgt_poll_group_000", 00:19:43.540 "listen_address": { 00:19:43.540 "trtype": "TCP", 00:19:43.540 "adrfam": "IPv4", 00:19:43.540 "traddr": "10.0.0.2", 00:19:43.540 "trsvcid": "4420" 00:19:43.540 }, 00:19:43.540 "peer_address": { 00:19:43.540 "trtype": "TCP", 00:19:43.540 "adrfam": "IPv4", 00:19:43.540 "traddr": "10.0.0.1", 00:19:43.540 "trsvcid": "35288" 00:19:43.540 }, 00:19:43.540 "auth": { 00:19:43.540 "state": "completed", 00:19:43.540 "digest": "sha512", 00:19:43.540 "dhgroup": "ffdhe8192" 00:19:43.540 } 00:19:43.540 } 00:19:43.540 ]' 00:19:43.540 22:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:43.800 22:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:43.800 
22:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:43.800 22:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:43.800 22:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:43.800 22:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.800 22:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.800 22:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.060 22:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:OTgyYzhiNDc0NTliZmYyZTc3YWU3ZGYwNzI0ZDIwOTdlMWI1OTRiZjc0YjljZjQyNDMxN2JjZDYwZmY0NGU4YcvVo98=: 00:19:44.630 22:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.630 22:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:44.630 22:17:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.630 22:17:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.630 22:17:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.630 22:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:44.630 22:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:19:44.630 22:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:44.630 22:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:44.630 22:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:44.630 22:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:45.109 22:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:19:45.109 22:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:45.109 22:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:45.109 22:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:45.109 22:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:45.109 22:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.109 22:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:19:45.109 22:17:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.109 22:17:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.109 22:17:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.109 22:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.109 22:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.369 00:19:45.369 22:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:45.369 22:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.369 22:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:45.629 22:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.629 22:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.629 22:17:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.629 22:17:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.629 22:17:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.629 22:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:45.629 { 00:19:45.629 "cntlid": 145, 00:19:45.629 "qid": 0, 00:19:45.629 "state": "enabled", 00:19:45.629 "thread": "nvmf_tgt_poll_group_000", 00:19:45.629 "listen_address": { 00:19:45.629 "trtype": "TCP", 00:19:45.629 "adrfam": "IPv4", 00:19:45.629 "traddr": "10.0.0.2", 00:19:45.629 "trsvcid": "4420" 00:19:45.629 }, 00:19:45.629 "peer_address": { 00:19:45.629 "trtype": "TCP", 00:19:45.629 "adrfam": "IPv4", 00:19:45.629 "traddr": "10.0.0.1", 00:19:45.629 "trsvcid": "35308" 00:19:45.629 }, 00:19:45.629 "auth": { 00:19:45.630 "state": "completed", 00:19:45.630 "digest": "sha512", 00:19:45.630 "dhgroup": "ffdhe8192" 00:19:45.630 } 00:19:45.630 } 00:19:45.630 ]' 00:19:45.630 22:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:45.630 22:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:45.630 22:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:45.630 22:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:45.630 22:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:45.630 22:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.630 22:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.630 22:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.890 22:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MGEwNDJiYTQ5YmE5ZmFlOTdiZWRhODFjYTVlMDI2NDc0MGM3M2ExMzYzOGQ3OTE4BxKSWw==: --dhchap-ctrl-secret DHHC-1:03:Y2IyYWZlNDQ1MDkyMDk4MGYzZjMwZmY4NjNhZDVjZDdmNGZmZDhlZDEyZjUzZWQzYjc3YmY3M2ZhODVkZjQxMrAnVt8=: 00:19:46.830 22:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.830 22:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:46.830 22:17:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.830 22:17:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.830 22:17:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.830 22:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:46.830 22:17:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.830 22:17:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.830 22:17:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.830 22:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:46.830 22:17:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:46.830 22:17:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:46.830 22:17:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:46.830 22:17:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:46.830 22:17:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:46.830 22:17:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:46.830 22:17:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:46.830 22:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:19:47.090 request: 00:19:47.090 { 00:19:47.090 "name": "nvme0", 00:19:47.090 "trtype": "tcp", 00:19:47.090 "traddr": "10.0.0.2", 00:19:47.090 "adrfam": "ipv4", 00:19:47.090 "trsvcid": "4420", 00:19:47.090 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:47.090 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:47.090 "prchk_reftag": false, 00:19:47.090 "prchk_guard": false, 00:19:47.090 "hdgst": false, 00:19:47.090 "ddgst": false, 00:19:47.090 "dhchap_key": "key2", 00:19:47.090 "method": "bdev_nvme_attach_controller", 00:19:47.090 "req_id": 1 00:19:47.090 } 00:19:47.090 Got JSON-RPC error response 00:19:47.090 response: 00:19:47.090 { 00:19:47.090 "code": -5, 00:19:47.090 "message": "Input/output error" 00:19:47.090 } 00:19:47.090 22:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:47.090 22:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:47.090 22:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:47.090 22:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:47.090 22:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:47.090 22:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.090 22:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.090 22:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.090 22:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.090 22:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.090 22:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.090 22:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.090 22:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:47.090 22:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:47.090 22:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:47.090 22:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:47.090 22:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:47.090 22:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:47.090 22:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:47.090 22:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:47.090 22:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:47.661 request: 00:19:47.661 { 00:19:47.661 "name": "nvme0", 00:19:47.661 "trtype": "tcp", 00:19:47.661 "traddr": "10.0.0.2", 00:19:47.661 "adrfam": "ipv4", 00:19:47.661 "trsvcid": "4420", 00:19:47.661 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:47.661 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:47.661 "prchk_reftag": false, 00:19:47.661 "prchk_guard": false, 00:19:47.661 "hdgst": false, 00:19:47.661 "ddgst": false, 00:19:47.661 "dhchap_key": "key1", 00:19:47.661 "dhchap_ctrlr_key": "ckey2", 00:19:47.661 "method": "bdev_nvme_attach_controller", 00:19:47.661 "req_id": 1 00:19:47.661 } 00:19:47.661 Got JSON-RPC error response 00:19:47.661 response: 00:19:47.661 { 00:19:47.661 "code": -5, 00:19:47.661 "message": "Input/output error" 00:19:47.661 } 00:19:47.661 22:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:47.661 22:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:47.661 22:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:47.661 22:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:47.661 22:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:47.661 22:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.661 22:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.661 22:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.661 22:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:47.661 22:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.661 22:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.661 22:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.661 22:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.661 22:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:47.661 22:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.661 22:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:19:47.661 22:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:47.661 22:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:47.661 22:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:47.661 22:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.661 22:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.231 request: 00:19:48.231 { 00:19:48.231 "name": "nvme0", 00:19:48.231 "trtype": "tcp", 00:19:48.231 "traddr": "10.0.0.2", 00:19:48.231 "adrfam": "ipv4", 00:19:48.231 "trsvcid": "4420", 00:19:48.231 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:48.231 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:48.231 "prchk_reftag": false, 00:19:48.231 "prchk_guard": false, 00:19:48.231 "hdgst": false, 00:19:48.231 "ddgst": false, 00:19:48.231 "dhchap_key": "key1", 00:19:48.231 "dhchap_ctrlr_key": "ckey1", 00:19:48.231 "method": "bdev_nvme_attach_controller", 00:19:48.231 "req_id": 1 00:19:48.231 } 00:19:48.231 Got JSON-RPC error response 00:19:48.231 response: 00:19:48.231 { 00:19:48.231 "code": -5, 00:19:48.231 "message": "Input/output error" 00:19:48.231 } 00:19:48.231 22:17:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:48.231 22:17:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:48.231 22:17:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:48.231 22:17:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:48.231 22:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:48.231 22:17:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.231 22:17:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.231 22:17:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.231 22:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 2766588 00:19:48.231 22:17:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2766588 ']' 00:19:48.231 22:17:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2766588 00:19:48.231 22:17:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:48.231 22:17:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:48.231 22:17:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2766588 00:19:48.231 22:17:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:48.231 22:17:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:19:48.231 22:17:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2766588' 00:19:48.231 killing process with pid 2766588 00:19:48.231 22:17:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2766588 00:19:48.231 22:17:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2766588 00:19:48.491 22:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:48.491 22:17:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:48.491 22:17:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:48.491 22:17:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.491 22:17:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2793424 00:19:48.491 22:17:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2793424 00:19:48.492 22:17:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:48.492 22:17:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2793424 ']' 00:19:48.492 22:17:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.492 22:17:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:48.492 22:17:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.492 22:17:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:48.492 22:17:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.432 22:17:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:49.432 22:17:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:49.432 22:17:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:49.432 22:17:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:49.432 22:17:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.432 22:17:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:49.432 22:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:49.432 22:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 2793424 00:19:49.432 22:17:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2793424 ']' 00:19:49.432 22:17:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.432 22:17:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:49.432 22:17:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
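The target restart traced above (killprocess of the first nvmf_tgt, then nvmfappstart --wait-for-rpc -L nvmf_auth and waitforlisten on /var/tmp/spdk.sock) can be reproduced by hand roughly as follows. This is a sketch, not the literal body of those helpers: the polling loop is an illustrative stand-in for waitforlisten, and rpc_get_methods / framework_start_init are the standard SPDK RPCs used once a target has been started with --wait-for-rpc.

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # same invocation as in the trace: run the target inside the test's network namespace
  ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!
  # poll the default RPC socket until the target answers, then let it finish initialization
  until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock framework_start_init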
00:19:49.432 22:17:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:49.432 22:17:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.432 22:17:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:49.432 22:17:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:49.432 22:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:19:49.432 22:17:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.432 22:17:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.432 22:17:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.432 22:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:19:49.432 22:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:49.432 22:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:49.432 22:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:49.432 22:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:49.432 22:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.432 22:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:49.432 22:17:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.432 22:17:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.432 22:17:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.432 22:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:49.432 22:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:50.004 00:19:50.004 22:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:50.004 22:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:50.004 22:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.266 22:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.266 22:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.266 22:17:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.266 22:17:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.266 22:17:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.266 22:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:50.266 { 00:19:50.266 
"cntlid": 1, 00:19:50.266 "qid": 0, 00:19:50.266 "state": "enabled", 00:19:50.266 "thread": "nvmf_tgt_poll_group_000", 00:19:50.266 "listen_address": { 00:19:50.266 "trtype": "TCP", 00:19:50.266 "adrfam": "IPv4", 00:19:50.266 "traddr": "10.0.0.2", 00:19:50.266 "trsvcid": "4420" 00:19:50.266 }, 00:19:50.266 "peer_address": { 00:19:50.266 "trtype": "TCP", 00:19:50.266 "adrfam": "IPv4", 00:19:50.266 "traddr": "10.0.0.1", 00:19:50.266 "trsvcid": "35372" 00:19:50.266 }, 00:19:50.266 "auth": { 00:19:50.266 "state": "completed", 00:19:50.266 "digest": "sha512", 00:19:50.266 "dhgroup": "ffdhe8192" 00:19:50.266 } 00:19:50.266 } 00:19:50.266 ]' 00:19:50.266 22:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:50.266 22:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:50.266 22:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:50.266 22:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:50.266 22:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:50.266 22:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.266 22:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.266 22:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.527 22:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:OTgyYzhiNDc0NTliZmYyZTc3YWU3ZGYwNzI0ZDIwOTdlMWI1OTRiZjc0YjljZjQyNDMxN2JjZDYwZmY0NGU4YcvVo98=: 00:19:51.470 22:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.470 22:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:51.470 22:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.470 22:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.470 22:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.470 22:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:51.470 22:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.470 22:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.470 22:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.470 22:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:51.470 22:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:51.470 22:17:16 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:51.470 22:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:51.470 22:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:51.470 22:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:51.470 22:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:51.470 22:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:51.470 22:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:51.470 22:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:51.470 22:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:51.470 request: 00:19:51.470 { 00:19:51.470 "name": "nvme0", 00:19:51.470 "trtype": "tcp", 00:19:51.470 "traddr": "10.0.0.2", 00:19:51.470 "adrfam": "ipv4", 00:19:51.470 "trsvcid": "4420", 00:19:51.470 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:51.470 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:51.470 "prchk_reftag": false, 00:19:51.470 "prchk_guard": false, 00:19:51.470 "hdgst": false, 00:19:51.470 "ddgst": false, 00:19:51.470 "dhchap_key": "key3", 00:19:51.470 "method": "bdev_nvme_attach_controller", 00:19:51.470 "req_id": 1 00:19:51.471 } 00:19:51.471 Got JSON-RPC error response 00:19:51.471 response: 00:19:51.471 { 00:19:51.471 "code": -5, 00:19:51.471 "message": "Input/output error" 00:19:51.471 } 00:19:51.731 22:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:51.731 22:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:51.731 22:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:51.731 22:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:51.731 22:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:19:51.731 22:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:19:51.731 22:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:51.731 22:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:51.731 22:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:51.731 22:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:51.731 22:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:51.731 22:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:51.731 22:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:51.731 22:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:51.731 22:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:51.731 22:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:51.731 22:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:51.991 request: 00:19:51.991 { 00:19:51.991 "name": "nvme0", 00:19:51.991 "trtype": "tcp", 00:19:51.991 "traddr": "10.0.0.2", 00:19:51.991 "adrfam": "ipv4", 00:19:51.991 "trsvcid": "4420", 00:19:51.991 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:51.991 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:51.991 "prchk_reftag": false, 00:19:51.991 "prchk_guard": false, 00:19:51.991 "hdgst": false, 00:19:51.991 "ddgst": false, 00:19:51.991 "dhchap_key": "key3", 00:19:51.991 "method": "bdev_nvme_attach_controller", 00:19:51.991 "req_id": 1 00:19:51.991 } 00:19:51.991 Got JSON-RPC error response 00:19:51.991 response: 00:19:51.991 { 00:19:51.991 "code": -5, 00:19:51.991 "message": "Input/output error" 00:19:51.991 } 00:19:51.991 22:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:51.991 22:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:51.991 22:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:51.991 22:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:51.991 22:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:51.991 22:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:19:51.991 22:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:51.991 22:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:51.991 22:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:51.991 22:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:51.991 22:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:51.991 22:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.991 22:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.991 22:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.991 22:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:51.991 22:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.991 22:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.991 22:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.991 22:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:51.991 22:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:51.991 22:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:51.991 22:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:51.991 22:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:51.991 22:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:51.991 22:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:51.991 22:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:51.991 22:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:52.252 request: 00:19:52.252 { 00:19:52.252 "name": "nvme0", 00:19:52.253 "trtype": "tcp", 00:19:52.253 "traddr": "10.0.0.2", 00:19:52.253 "adrfam": "ipv4", 00:19:52.253 "trsvcid": "4420", 00:19:52.253 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:52.253 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:52.253 "prchk_reftag": false, 00:19:52.253 "prchk_guard": false, 00:19:52.253 "hdgst": false, 00:19:52.253 "ddgst": false, 00:19:52.253 
"dhchap_key": "key0", 00:19:52.253 "dhchap_ctrlr_key": "key1", 00:19:52.253 "method": "bdev_nvme_attach_controller", 00:19:52.253 "req_id": 1 00:19:52.253 } 00:19:52.253 Got JSON-RPC error response 00:19:52.253 response: 00:19:52.253 { 00:19:52.253 "code": -5, 00:19:52.253 "message": "Input/output error" 00:19:52.253 } 00:19:52.253 22:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:52.253 22:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:52.253 22:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:52.253 22:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:52.253 22:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:52.253 22:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:52.513 00:19:52.513 22:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:19:52.513 22:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:19:52.513 22:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.513 22:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.513 22:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.513 22:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.774 22:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:19:52.774 22:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:19:52.774 22:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2766639 00:19:52.774 22:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2766639 ']' 00:19:52.774 22:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2766639 00:19:52.774 22:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:52.774 22:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:52.774 22:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2766639 00:19:52.774 22:17:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:52.774 22:17:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:52.774 22:17:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2766639' 00:19:52.774 killing process with pid 2766639 00:19:52.774 22:17:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2766639 00:19:52.774 22:17:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2766639 
00:19:53.034 22:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:53.034 22:17:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:53.034 22:17:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:19:53.034 22:17:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:53.034 22:17:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:19:53.034 22:17:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:53.034 22:17:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:53.034 rmmod nvme_tcp 00:19:53.034 rmmod nvme_fabrics 00:19:53.034 rmmod nvme_keyring 00:19:53.034 22:17:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:53.034 22:17:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:19:53.034 22:17:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:19:53.034 22:17:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 2793424 ']' 00:19:53.034 22:17:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2793424 00:19:53.034 22:17:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2793424 ']' 00:19:53.034 22:17:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2793424 00:19:53.034 22:17:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:53.034 22:17:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:53.034 22:17:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2793424 00:19:53.034 22:17:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:53.034 22:17:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:53.034 22:17:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2793424' 00:19:53.034 killing process with pid 2793424 00:19:53.034 22:17:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2793424 00:19:53.034 22:17:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2793424 00:19:53.294 22:17:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:53.294 22:17:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:53.294 22:17:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:53.294 22:17:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:53.294 22:17:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:53.294 22:17:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.294 22:17:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:53.294 22:17:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:55.839 22:17:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:55.839 22:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.UVQ /tmp/spdk.key-sha256.pxd /tmp/spdk.key-sha384.f1Z /tmp/spdk.key-sha512.0tk /tmp/spdk.key-sha512.EhW /tmp/spdk.key-sha384.R6C /tmp/spdk.key-sha256.eSx '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:55.839 00:19:55.839 real 2m24.305s 00:19:55.839 user 5m20.852s 00:19:55.839 sys 0m21.446s 00:19:55.839 22:17:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:55.839 22:17:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.839 ************************************ 00:19:55.839 END TEST nvmf_auth_target 00:19:55.839 ************************************ 00:19:55.839 22:17:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:55.839 22:17:20 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:19:55.839 22:17:20 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:55.839 22:17:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:19:55.839 22:17:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:55.839 22:17:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:55.839 ************************************ 00:19:55.839 START TEST nvmf_bdevio_no_huge 00:19:55.839 ************************************ 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:55.839 * Looking for test storage... 00:19:55.839 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
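For reference, the host identity reused throughout these runs is generated by nvme-cli, as traced in nvmf/common.sh above; a minimal sketch of that derivation follows (the exact parameter expansion used by common.sh may differ slightly).

  NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # the UUID suffix, passed as --hostid
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")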
00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:19:55.839 22:17:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:02.462 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:02.462 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:20:02.462 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:02.462 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:02.462 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:02.462 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:02.462 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:02.462 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:20:02.462 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:02.462 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:20:02.462 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:20:02.462 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:20:02.462 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:20:02.462 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:20:02.462 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:20:02.462 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:02.462 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:02.462 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:02.462 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:02.462 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:02.462 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:02.462 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:02.462 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:02.462 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:02.462 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:02.462 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:02.462 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:02.462 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:02.462 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:02.462 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:02.462 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:02.462 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:02.462 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:02.462 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:02.462 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:02.462 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:02.462 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:02.462 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:02.462 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:02.462 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:02.462 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:02.462 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:02.462 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:02.462 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:02.462 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:02.463 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:02.463 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:02.463 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:02.463 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:02.463 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:02.463 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:02.463 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:02.463 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.463 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:02.463 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:02.463 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:02.463 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:02.463 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.463 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:02.463 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:02.463 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.463 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:02.463 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.463 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:02.463 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:02.463 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:02.463 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:02.463 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.463 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:02.463 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:02.463 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.463 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:02.463 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:20:02.463 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:02.463 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:02.463 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:02.463 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:02.463 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:02.463 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:02.463 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:02.463 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:02.463 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:02.463 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:02.463 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:02.463 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:20:02.463 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:02.463 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:02.463 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:02.463 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:02.463 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:02.463 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:02.463 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:02.463 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:02.723 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:02.723 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:02.723 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:02.723 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:02.723 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.591 ms 00:20:02.723 00:20:02.723 --- 10.0.0.2 ping statistics --- 00:20:02.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.723 rtt min/avg/max/mdev = 0.591/0.591/0.591/0.000 ms 00:20:02.723 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:02.723 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:02.723 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.375 ms 00:20:02.723 00:20:02.723 --- 10.0.0.1 ping statistics --- 00:20:02.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.723 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:20:02.723 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:02.723 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:20:02.723 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:02.723 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:02.723 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:02.723 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:02.723 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:02.723 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:02.723 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:02.723 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:02.723 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:02.723 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:02.723 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:02.723 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=2798474 00:20:02.723 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 2798474 00:20:02.724 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:02.724 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 2798474 ']' 00:20:02.724 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.724 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:02.724 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:02.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:02.724 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:02.724 22:17:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:02.724 [2024-07-15 22:17:27.996531] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:20:02.724 [2024-07-15 22:17:27.996620] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:02.984 [2024-07-15 22:17:28.098810] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:02.984 [2024-07-15 22:17:28.207675] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:02.984 [2024-07-15 22:17:28.207727] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:02.984 [2024-07-15 22:17:28.207735] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:02.984 [2024-07-15 22:17:28.207742] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:02.984 [2024-07-15 22:17:28.207749] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:02.984 [2024-07-15 22:17:28.207924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:02.984 [2024-07-15 22:17:28.208200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:20:02.984 [2024-07-15 22:17:28.208219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:02.984 [2024-07-15 22:17:28.208035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:20:03.554 22:17:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:03.554 22:17:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:20:03.554 22:17:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:03.554 22:17:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:03.554 22:17:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:03.554 22:17:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:03.554 22:17:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:03.554 22:17:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.554 22:17:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:03.554 [2024-07-15 22:17:28.836588] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:03.554 22:17:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.554 22:17:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:03.554 22:17:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.554 22:17:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:03.554 Malloc0 00:20:03.554 22:17:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.554 22:17:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:03.554 22:17:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.554 22:17:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:03.554 22:17:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.554 22:17:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:03.554 22:17:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.554 22:17:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:03.813 22:17:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.813 22:17:28 
nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:03.813 22:17:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.813 22:17:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:03.813 [2024-07-15 22:17:28.890437] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:03.813 22:17:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.813 22:17:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:03.813 22:17:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:03.813 22:17:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:20:03.813 22:17:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:20:03.813 22:17:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.813 22:17:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.813 { 00:20:03.813 "params": { 00:20:03.813 "name": "Nvme$subsystem", 00:20:03.813 "trtype": "$TEST_TRANSPORT", 00:20:03.813 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.813 "adrfam": "ipv4", 00:20:03.813 "trsvcid": "$NVMF_PORT", 00:20:03.813 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.813 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.813 "hdgst": ${hdgst:-false}, 00:20:03.813 "ddgst": ${ddgst:-false} 00:20:03.813 }, 00:20:03.813 "method": "bdev_nvme_attach_controller" 00:20:03.813 } 00:20:03.813 EOF 00:20:03.813 )") 00:20:03.813 22:17:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:20:03.813 22:17:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:20:03.813 22:17:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:20:03.813 22:17:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:03.813 "params": { 00:20:03.813 "name": "Nvme1", 00:20:03.813 "trtype": "tcp", 00:20:03.813 "traddr": "10.0.0.2", 00:20:03.813 "adrfam": "ipv4", 00:20:03.814 "trsvcid": "4420", 00:20:03.814 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.814 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:03.814 "hdgst": false, 00:20:03.814 "ddgst": false 00:20:03.814 }, 00:20:03.814 "method": "bdev_nvme_attach_controller" 00:20:03.814 }' 00:20:03.814 [2024-07-15 22:17:28.955818] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
00:20:03.814 [2024-07-15 22:17:28.955904] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2798679 ] 00:20:03.814 [2024-07-15 22:17:29.026369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:03.814 [2024-07-15 22:17:29.124619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.814 [2024-07-15 22:17:29.124741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:03.814 [2024-07-15 22:17:29.124744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.096 I/O targets: 00:20:04.096 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:04.096 00:20:04.096 00:20:04.096 CUnit - A unit testing framework for C - Version 2.1-3 00:20:04.096 http://cunit.sourceforge.net/ 00:20:04.096 00:20:04.096 00:20:04.096 Suite: bdevio tests on: Nvme1n1 00:20:04.096 Test: blockdev write read block ...passed 00:20:04.096 Test: blockdev write zeroes read block ...passed 00:20:04.096 Test: blockdev write zeroes read no split ...passed 00:20:04.355 Test: blockdev write zeroes read split ...passed 00:20:04.355 Test: blockdev write zeroes read split partial ...passed 00:20:04.355 Test: blockdev reset ...[2024-07-15 22:17:29.462519] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:04.355 [2024-07-15 22:17:29.462573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b6c10 (9): Bad file descriptor 00:20:04.355 [2024-07-15 22:17:29.479299] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:04.355 passed 00:20:04.355 Test: blockdev write read 8 blocks ...passed 00:20:04.355 Test: blockdev write read size > 128k ...passed 00:20:04.355 Test: blockdev write read invalid size ...passed 00:20:04.355 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:04.355 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:04.355 Test: blockdev write read max offset ...passed 00:20:04.355 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:04.355 Test: blockdev writev readv 8 blocks ...passed 00:20:04.355 Test: blockdev writev readv 30 x 1block ...passed 00:20:04.615 Test: blockdev writev readv block ...passed 00:20:04.615 Test: blockdev writev readv size > 128k ...passed 00:20:04.615 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:04.615 Test: blockdev comparev and writev ...[2024-07-15 22:17:29.701818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:04.615 [2024-07-15 22:17:29.701841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:04.615 [2024-07-15 22:17:29.701852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:04.615 [2024-07-15 22:17:29.701858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.615 [2024-07-15 22:17:29.702236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:04.615 [2024-07-15 22:17:29.702244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:04.615 [2024-07-15 22:17:29.702254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:04.615 [2024-07-15 22:17:29.702259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:04.615 [2024-07-15 22:17:29.702661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:04.615 [2024-07-15 22:17:29.702668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:04.615 [2024-07-15 22:17:29.702677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:04.615 [2024-07-15 22:17:29.702682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:04.615 [2024-07-15 22:17:29.703086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:04.615 [2024-07-15 22:17:29.703094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:04.615 [2024-07-15 22:17:29.703103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:04.615 [2024-07-15 22:17:29.703108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:04.615 passed 00:20:04.615 Test: blockdev nvme passthru rw ...passed 00:20:04.615 Test: blockdev nvme passthru vendor specific ...[2024-07-15 22:17:29.787680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:04.615 [2024-07-15 22:17:29.787691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:04.615 [2024-07-15 22:17:29.787950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:04.615 [2024-07-15 22:17:29.787957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:04.615 [2024-07-15 22:17:29.788224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:04.615 [2024-07-15 22:17:29.788234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:04.615 [2024-07-15 22:17:29.788524] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:04.615 [2024-07-15 22:17:29.788531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:04.615 passed 00:20:04.615 Test: blockdev nvme admin passthru ...passed 00:20:04.615 Test: blockdev copy ...passed 00:20:04.615 00:20:04.615 Run Summary: Type Total Ran Passed Failed Inactive 00:20:04.615 suites 1 1 n/a 0 0 00:20:04.615 tests 23 23 23 0 0 00:20:04.615 asserts 152 152 152 0 n/a 00:20:04.615 00:20:04.615 Elapsed time = 1.077 seconds 00:20:04.875 22:17:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:04.875 22:17:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.875 22:17:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:04.875 22:17:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.875 22:17:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:04.875 22:17:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:04.875 22:17:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:04.875 22:17:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:20:04.875 22:17:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:04.875 22:17:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:20:04.875 22:17:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:04.875 22:17:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:04.875 rmmod nvme_tcp 00:20:04.875 rmmod nvme_fabrics 00:20:04.875 rmmod nvme_keyring 00:20:05.134 22:17:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:05.134 22:17:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:20:05.134 22:17:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:20:05.134 22:17:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 2798474 ']' 00:20:05.134 22:17:30 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 2798474 00:20:05.134 22:17:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 2798474 ']' 00:20:05.134 22:17:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 2798474 00:20:05.134 22:17:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:20:05.134 22:17:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:05.134 22:17:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2798474 00:20:05.134 22:17:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:20:05.134 22:17:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:20:05.134 22:17:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2798474' 00:20:05.134 killing process with pid 2798474 00:20:05.134 22:17:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 2798474 00:20:05.134 22:17:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 2798474 00:20:05.393 22:17:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:05.393 22:17:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:05.393 22:17:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:05.394 22:17:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:05.394 22:17:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:05.394 22:17:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:05.394 22:17:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:05.394 22:17:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.930 22:17:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:07.930 00:20:07.930 real 0m12.002s 00:20:07.930 user 0m13.275s 00:20:07.930 sys 0m6.316s 00:20:07.930 22:17:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:07.930 22:17:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:07.930 ************************************ 00:20:07.930 END TEST nvmf_bdevio_no_huge 00:20:07.930 ************************************ 00:20:07.930 22:17:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:07.930 22:17:32 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:07.930 22:17:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:07.930 22:17:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:07.930 22:17:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:07.930 ************************************ 00:20:07.930 START TEST nvmf_tls 00:20:07.930 ************************************ 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:07.930 * Looking for test storage... 
00:20:07.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:20:07.930 22:17:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:14.512 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:14.512 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:20:14.512 
22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:14.512 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:14.513 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:14.513 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:14.513 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:14.513 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:14.513 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:14.513 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.591 ms 00:20:14.513 00:20:14.513 --- 10.0.0.2 ping statistics --- 00:20:14.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.513 rtt min/avg/max/mdev = 0.591/0.591/0.591/0.000 ms 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:14.513 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:14.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.381 ms 00:20:14.513 00:20:14.513 --- 10.0.0.1 ping statistics --- 00:20:14.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.513 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2802968 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2802968 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2802968 ']' 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:14.513 22:17:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:14.513 [2024-07-15 22:17:39.615866] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:20:14.513 [2024-07-15 22:17:39.615931] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:14.513 EAL: No free 2048 kB hugepages reported on node 1 00:20:14.513 [2024-07-15 22:17:39.707841] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.513 [2024-07-15 22:17:39.800285] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:14.513 [2024-07-15 22:17:39.800342] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:14.513 [2024-07-15 22:17:39.800351] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:14.513 [2024-07-15 22:17:39.800357] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:14.513 [2024-07-15 22:17:39.800363] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:14.513 [2024-07-15 22:17:39.800393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:15.085 22:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:15.085 22:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:15.085 22:17:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:15.085 22:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:15.085 22:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.346 22:17:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:15.346 22:17:40 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:20:15.346 22:17:40 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:15.346 true 00:20:15.346 22:17:40 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:15.346 22:17:40 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:20:15.607 22:17:40 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:20:15.607 22:17:40 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:20:15.607 22:17:40 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:15.868 22:17:40 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:15.868 22:17:40 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:20:15.868 22:17:41 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:20:15.868 22:17:41 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:20:15.868 22:17:41 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:16.130 22:17:41 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:20:16.130 22:17:41 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:16.390 22:17:41 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:20:16.390 22:17:41 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:20:16.390 22:17:41 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:16.390 22:17:41 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:20:16.391 22:17:41 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:20:16.391 22:17:41 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:20:16.391 22:17:41 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:16.651 22:17:41 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:16.651 22:17:41 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:20:16.912 22:17:41 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:20:16.912 22:17:41 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:20:16.912 22:17:41 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:16.912 22:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:16.912 22:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:20:17.173 22:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:20:17.173 22:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:20:17.173 22:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:17.173 22:17:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:17.173 22:17:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:17.173 22:17:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:17.173 22:17:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:17.173 22:17:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:17.173 22:17:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:17.173 22:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:17.173 22:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:17.173 22:17:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:17.173 22:17:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:17.173 22:17:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:17.173 22:17:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:20:17.173 22:17:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:17.173 22:17:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:17.173 22:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:17.173 22:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:20:17.173 22:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.OyyQ6uttse 00:20:17.173 22:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:17.173 22:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.gLlyBK6Xmw 00:20:17.173 22:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:17.173 22:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:17.173 22:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.OyyQ6uttse 00:20:17.173 22:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.gLlyBK6Xmw 00:20:17.173 22:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:20:17.433 22:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:17.693 22:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.OyyQ6uttse 00:20:17.693 22:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.OyyQ6uttse 00:20:17.693 22:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:17.693 [2024-07-15 22:17:42.968410] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:17.693 22:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:17.952 22:17:43 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:17.952 [2024-07-15 22:17:43.277165] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:18.211 [2024-07-15 22:17:43.277349] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:18.211 22:17:43 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:18.211 malloc0 00:20:18.211 22:17:43 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:18.470 22:17:43 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.OyyQ6uttse 00:20:18.470 [2024-07-15 22:17:43.708221] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:18.471 22:17:43 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.OyyQ6uttse 00:20:18.471 EAL: No free 2048 kB hugepages reported on node 1 00:20:28.514 Initializing NVMe Controllers 00:20:28.514 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:28.514 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:28.514 Initialization complete. Launching workers. 
00:20:28.514 ======================================================== 00:20:28.514 Latency(us) 00:20:28.514 Device Information : IOPS MiB/s Average min max 00:20:28.514 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18993.59 74.19 3369.58 1162.76 6488.98 00:20:28.514 ======================================================== 00:20:28.514 Total : 18993.59 74.19 3369.58 1162.76 6488.98 00:20:28.514 00:20:28.514 22:17:53 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OyyQ6uttse 00:20:28.514 22:17:53 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:28.514 22:17:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:28.514 22:17:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:28.514 22:17:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.OyyQ6uttse' 00:20:28.514 22:17:53 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:28.514 22:17:53 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2805848 00:20:28.514 22:17:53 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:28.514 22:17:53 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2805848 /var/tmp/bdevperf.sock 00:20:28.514 22:17:53 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:28.514 22:17:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2805848 ']' 00:20:28.514 22:17:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:28.514 22:17:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:28.514 22:17:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:28.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:28.514 22:17:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:28.514 22:17:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:28.773 [2024-07-15 22:17:53.890419] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
00:20:28.773 [2024-07-15 22:17:53.890475] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2805848 ] 00:20:28.773 EAL: No free 2048 kB hugepages reported on node 1 00:20:28.773 [2024-07-15 22:17:53.939313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.773 [2024-07-15 22:17:53.991722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:29.343 22:17:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:29.344 22:17:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:29.344 22:17:54 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.OyyQ6uttse 00:20:29.604 [2024-07-15 22:17:54.800374] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:29.604 [2024-07-15 22:17:54.800430] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:29.604 TLSTESTn1 00:20:29.604 22:17:54 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:29.865 Running I/O for 10 seconds... 00:20:39.861 00:20:39.862 Latency(us) 00:20:39.862 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.862 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:39.862 Verification LBA range: start 0x0 length 0x2000 00:20:39.862 TLSTESTn1 : 10.05 2458.01 9.60 0.00 0.00 51944.73 6198.61 111411.20 00:20:39.862 =================================================================================================================== 00:20:39.862 Total : 2458.01 9.60 0.00 0.00 51944.73 6198.61 111411.20 00:20:39.862 0 00:20:39.862 22:18:05 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:39.862 22:18:05 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2805848 00:20:39.862 22:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2805848 ']' 00:20:39.862 22:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2805848 00:20:39.862 22:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:39.862 22:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:39.862 22:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2805848 00:20:39.862 22:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:39.862 22:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:39.862 22:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2805848' 00:20:39.862 killing process with pid 2805848 00:20:39.862 22:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2805848 00:20:39.862 Received shutdown signal, test time was about 10.000000 seconds 00:20:39.862 00:20:39.862 Latency(us) 00:20:39.862 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:20:39.862 =================================================================================================================== 00:20:39.862 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:39.862 [2024-07-15 22:18:05.137726] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:39.862 22:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2805848 00:20:40.122 22:18:05 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gLlyBK6Xmw 00:20:40.122 22:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:40.122 22:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gLlyBK6Xmw 00:20:40.122 22:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:40.122 22:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:40.122 22:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:40.122 22:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:40.122 22:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gLlyBK6Xmw 00:20:40.122 22:18:05 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:40.122 22:18:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:40.122 22:18:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:40.122 22:18:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.gLlyBK6Xmw' 00:20:40.122 22:18:05 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:40.122 22:18:05 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2808012 00:20:40.122 22:18:05 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:40.122 22:18:05 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2808012 /var/tmp/bdevperf.sock 00:20:40.122 22:18:05 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:40.122 22:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2808012 ']' 00:20:40.122 22:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:40.122 22:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:40.122 22:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:40.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:40.122 22:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:40.122 22:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:40.122 [2024-07-15 22:18:05.303552] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
00:20:40.122 [2024-07-15 22:18:05.303610] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2808012 ] 00:20:40.122 EAL: No free 2048 kB hugepages reported on node 1 00:20:40.122 [2024-07-15 22:18:05.354278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.122 [2024-07-15 22:18:05.405035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:41.064 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:41.064 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:41.064 22:18:06 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gLlyBK6Xmw 00:20:41.064 [2024-07-15 22:18:06.217990] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:41.064 [2024-07-15 22:18:06.218048] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:41.065 [2024-07-15 22:18:06.223926] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:41.065 [2024-07-15 22:18:06.224095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2300ec0 (107): Transport endpoint is not connected 00:20:41.065 [2024-07-15 22:18:06.225032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2300ec0 (9): Bad file descriptor 00:20:41.065 [2024-07-15 22:18:06.226033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:41.065 [2024-07-15 22:18:06.226040] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:41.065 [2024-07-15 22:18:06.226047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:41.065 request: 00:20:41.065 { 00:20:41.065 "name": "TLSTEST", 00:20:41.065 "trtype": "tcp", 00:20:41.065 "traddr": "10.0.0.2", 00:20:41.065 "adrfam": "ipv4", 00:20:41.065 "trsvcid": "4420", 00:20:41.065 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.065 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:41.065 "prchk_reftag": false, 00:20:41.065 "prchk_guard": false, 00:20:41.065 "hdgst": false, 00:20:41.065 "ddgst": false, 00:20:41.065 "psk": "/tmp/tmp.gLlyBK6Xmw", 00:20:41.065 "method": "bdev_nvme_attach_controller", 00:20:41.065 "req_id": 1 00:20:41.065 } 00:20:41.065 Got JSON-RPC error response 00:20:41.065 response: 00:20:41.065 { 00:20:41.065 "code": -5, 00:20:41.065 "message": "Input/output error" 00:20:41.065 } 00:20:41.065 22:18:06 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2808012 00:20:41.065 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2808012 ']' 00:20:41.065 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2808012 00:20:41.065 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:41.065 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:41.065 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2808012 00:20:41.065 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:41.065 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:41.065 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2808012' 00:20:41.065 killing process with pid 2808012 00:20:41.065 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2808012 00:20:41.065 Received shutdown signal, test time was about 10.000000 seconds 00:20:41.065 00:20:41.065 Latency(us) 00:20:41.065 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:41.065 =================================================================================================================== 00:20:41.065 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:41.065 [2024-07-15 22:18:06.311794] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:41.065 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2808012 00:20:41.326 22:18:06 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:41.326 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:41.326 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:41.326 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:41.326 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:41.326 22:18:06 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.OyyQ6uttse 00:20:41.326 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:41.326 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.OyyQ6uttse 00:20:41.326 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:41.326 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:41.326 22:18:06 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:41.326 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:41.326 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.OyyQ6uttse 00:20:41.326 22:18:06 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:41.326 22:18:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:41.326 22:18:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:41.326 22:18:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.OyyQ6uttse' 00:20:41.326 22:18:06 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:41.326 22:18:06 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2808349 00:20:41.326 22:18:06 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:41.326 22:18:06 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2808349 /var/tmp/bdevperf.sock 00:20:41.326 22:18:06 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:41.326 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2808349 ']' 00:20:41.326 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:41.326 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:41.326 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:41.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:41.326 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:41.326 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:41.326 [2024-07-15 22:18:06.468552] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
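target/tls.sh@149 above repeats the attach with hostnqn nqn.2016-06.io.spdk:host2, a host that was never registered with a PSK, so the target-side lookup in the entries that follow fails. The identity it searches for is just the two NQNs behind a fixed prefix; a sketch of the same concatenation (the NVMe0R01 prefix is copied from the log below, and its internal encoding is not explained anywhere in this trace):

    # identity string reported in the "Could not find PSK for identity" errors below
    hostnqn=nqn.2016-06.io.spdk:host2
    subnqn=nqn.2016-06.io.spdk:cnode1
    echo "NVMe0R01 ${hostnqn} ${subnqn}"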
00:20:41.326 [2024-07-15 22:18:06.468612] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2808349 ] 00:20:41.326 EAL: No free 2048 kB hugepages reported on node 1 00:20:41.326 [2024-07-15 22:18:06.518382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.326 [2024-07-15 22:18:06.570044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:41.326 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:41.326 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:41.326 22:18:06 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.OyyQ6uttse 00:20:41.587 [2024-07-15 22:18:06.773667] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:41.587 [2024-07-15 22:18:06.773733] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:41.587 [2024-07-15 22:18:06.777912] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:41.587 [2024-07-15 22:18:06.777940] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:41.587 [2024-07-15 22:18:06.777965] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:41.587 [2024-07-15 22:18:06.778595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2270ec0 (107): Transport endpoint is not connected 00:20:41.587 [2024-07-15 22:18:06.779589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2270ec0 (9): Bad file descriptor 00:20:41.587 [2024-07-15 22:18:06.780590] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:41.587 [2024-07-15 22:18:06.780596] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:41.587 [2024-07-15 22:18:06.780603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:41.587 request: 00:20:41.587 { 00:20:41.587 "name": "TLSTEST", 00:20:41.587 "trtype": "tcp", 00:20:41.587 "traddr": "10.0.0.2", 00:20:41.587 "adrfam": "ipv4", 00:20:41.587 "trsvcid": "4420", 00:20:41.587 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.587 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:41.587 "prchk_reftag": false, 00:20:41.587 "prchk_guard": false, 00:20:41.587 "hdgst": false, 00:20:41.587 "ddgst": false, 00:20:41.587 "psk": "/tmp/tmp.OyyQ6uttse", 00:20:41.587 "method": "bdev_nvme_attach_controller", 00:20:41.587 "req_id": 1 00:20:41.587 } 00:20:41.587 Got JSON-RPC error response 00:20:41.587 response: 00:20:41.587 { 00:20:41.587 "code": -5, 00:20:41.587 "message": "Input/output error" 00:20:41.587 } 00:20:41.587 22:18:06 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2808349 00:20:41.587 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2808349 ']' 00:20:41.587 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2808349 00:20:41.587 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:41.587 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:41.587 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2808349 00:20:41.587 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:41.587 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:41.587 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2808349' 00:20:41.587 killing process with pid 2808349 00:20:41.587 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2808349 00:20:41.587 Received shutdown signal, test time was about 10.000000 seconds 00:20:41.587 00:20:41.587 Latency(us) 00:20:41.587 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:41.587 =================================================================================================================== 00:20:41.587 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:41.587 [2024-07-15 22:18:06.865549] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:41.587 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2808349 00:20:41.848 22:18:06 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:41.848 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:41.848 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:41.848 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:41.848 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:41.848 22:18:06 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.OyyQ6uttse 00:20:41.848 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:41.848 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.OyyQ6uttse 00:20:41.848 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:41.848 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:41.848 22:18:06 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:41.848 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:41.848 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.OyyQ6uttse 00:20:41.848 22:18:06 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:41.848 22:18:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:41.848 22:18:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:41.848 22:18:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.OyyQ6uttse' 00:20:41.848 22:18:06 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:41.848 22:18:06 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2808364 00:20:41.848 22:18:06 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:41.848 22:18:06 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2808364 /var/tmp/bdevperf.sock 00:20:41.848 22:18:06 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:41.848 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2808364 ']' 00:20:41.848 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:41.848 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:41.848 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:41.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:41.848 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:41.848 22:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:41.848 [2024-07-15 22:18:07.031513] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
00:20:41.848 [2024-07-15 22:18:07.031568] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2808364 ] 00:20:41.848 EAL: No free 2048 kB hugepages reported on node 1 00:20:41.848 [2024-07-15 22:18:07.082372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.848 [2024-07-15 22:18:07.134043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:42.788 22:18:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:42.788 22:18:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:42.788 22:18:07 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.OyyQ6uttse 00:20:42.788 [2024-07-15 22:18:07.934954] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:42.788 [2024-07-15 22:18:07.935017] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:42.788 [2024-07-15 22:18:07.939362] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:42.788 [2024-07-15 22:18:07.939380] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:42.788 [2024-07-15 22:18:07.939400] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:42.788 [2024-07-15 22:18:07.940056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14daec0 (107): Transport endpoint is not connected 00:20:42.788 [2024-07-15 22:18:07.941051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14daec0 (9): Bad file descriptor 00:20:42.788 [2024-07-15 22:18:07.942052] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:42.788 [2024-07-15 22:18:07.942058] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:42.788 [2024-07-15 22:18:07.942065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:20:42.789 request: 00:20:42.789 { 00:20:42.789 "name": "TLSTEST", 00:20:42.789 "trtype": "tcp", 00:20:42.789 "traddr": "10.0.0.2", 00:20:42.789 "adrfam": "ipv4", 00:20:42.789 "trsvcid": "4420", 00:20:42.789 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:42.789 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:42.789 "prchk_reftag": false, 00:20:42.789 "prchk_guard": false, 00:20:42.789 "hdgst": false, 00:20:42.789 "ddgst": false, 00:20:42.789 "psk": "/tmp/tmp.OyyQ6uttse", 00:20:42.789 "method": "bdev_nvme_attach_controller", 00:20:42.789 "req_id": 1 00:20:42.789 } 00:20:42.789 Got JSON-RPC error response 00:20:42.789 response: 00:20:42.789 { 00:20:42.789 "code": -5, 00:20:42.789 "message": "Input/output error" 00:20:42.789 } 00:20:42.789 22:18:07 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2808364 00:20:42.789 22:18:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2808364 ']' 00:20:42.789 22:18:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2808364 00:20:42.789 22:18:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:42.789 22:18:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:42.789 22:18:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2808364 00:20:42.789 22:18:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:42.789 22:18:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:42.789 22:18:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2808364' 00:20:42.789 killing process with pid 2808364 00:20:42.789 22:18:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2808364 00:20:42.789 Received shutdown signal, test time was about 10.000000 seconds 00:20:42.789 00:20:42.789 Latency(us) 00:20:42.789 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:42.789 =================================================================================================================== 00:20:42.789 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:42.789 [2024-07-15 22:18:08.025450] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:42.789 22:18:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2808364 00:20:43.048 22:18:08 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:43.048 22:18:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:43.048 22:18:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:43.048 22:18:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:43.048 22:18:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:43.048 22:18:08 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:43.048 22:18:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:43.048 22:18:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:43.048 22:18:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:43.048 22:18:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:43.049 22:18:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:20:43.049 22:18:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:43.049 22:18:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:43.049 22:18:08 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:43.049 22:18:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:43.049 22:18:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:43.049 22:18:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:43.049 22:18:08 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:43.049 22:18:08 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2808704 00:20:43.049 22:18:08 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:43.049 22:18:08 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2808704 /var/tmp/bdevperf.sock 00:20:43.049 22:18:08 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:43.049 22:18:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2808704 ']' 00:20:43.049 22:18:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:43.049 22:18:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:43.049 22:18:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:43.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:43.049 22:18:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:43.049 22:18:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:43.049 [2024-07-15 22:18:08.195755] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
00:20:43.049 [2024-07-15 22:18:08.195823] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2808704 ] 00:20:43.049 EAL: No free 2048 kB hugepages reported on node 1 00:20:43.049 [2024-07-15 22:18:08.245998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.049 [2024-07-15 22:18:08.297476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:43.990 22:18:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:43.990 22:18:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:43.990 22:18:08 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:43.990 [2024-07-15 22:18:09.100872] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:43.990 [2024-07-15 22:18:09.102991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8574a0 (9): Bad file descriptor 00:20:43.990 [2024-07-15 22:18:09.103990] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:43.990 [2024-07-15 22:18:09.103997] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:43.990 [2024-07-15 22:18:09.104004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
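With this bdevperf instance up, target/tls.sh@155 issues the same attach RPC but with an empty psk against the listener configured earlier in this run, and the expected outcome is the failure shown next. Reduced to a standalone check, under the assumption that the socket path and addressing from this log are reused as-is:

    # attach without a PSK; success here would mean the negative test failed
    if /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
          bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
          -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1; then
        echo "unexpected: attach without a PSK succeeded" >&2
        exit 1
    fi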
00:20:43.990 request: 00:20:43.990 { 00:20:43.990 "name": "TLSTEST", 00:20:43.990 "trtype": "tcp", 00:20:43.990 "traddr": "10.0.0.2", 00:20:43.990 "adrfam": "ipv4", 00:20:43.990 "trsvcid": "4420", 00:20:43.990 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:43.990 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:43.990 "prchk_reftag": false, 00:20:43.990 "prchk_guard": false, 00:20:43.990 "hdgst": false, 00:20:43.990 "ddgst": false, 00:20:43.990 "method": "bdev_nvme_attach_controller", 00:20:43.990 "req_id": 1 00:20:43.990 } 00:20:43.990 Got JSON-RPC error response 00:20:43.990 response: 00:20:43.990 { 00:20:43.990 "code": -5, 00:20:43.990 "message": "Input/output error" 00:20:43.990 } 00:20:43.990 22:18:09 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2808704 00:20:43.990 22:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2808704 ']' 00:20:43.990 22:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2808704 00:20:43.990 22:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:43.990 22:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:43.990 22:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2808704 00:20:43.990 22:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:43.990 22:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:43.990 22:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2808704' 00:20:43.990 killing process with pid 2808704 00:20:43.990 22:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2808704 00:20:43.990 Received shutdown signal, test time was about 10.000000 seconds 00:20:43.990 00:20:43.990 Latency(us) 00:20:43.990 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.991 =================================================================================================================== 00:20:43.991 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:43.991 22:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2808704 00:20:43.991 22:18:09 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:43.991 22:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:43.991 22:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:43.991 22:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:43.991 22:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:43.991 22:18:09 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 2802968 00:20:43.991 22:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2802968 ']' 00:20:43.991 22:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2802968 00:20:43.991 22:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:43.991 22:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:43.991 22:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2802968 00:20:44.252 22:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:44.252 22:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:44.252 22:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2802968' 00:20:44.252 
killing process with pid 2802968 00:20:44.252 22:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2802968 00:20:44.252 [2024-07-15 22:18:09.352370] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:44.252 22:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2802968 00:20:44.252 22:18:09 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:44.252 22:18:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:44.252 22:18:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:44.252 22:18:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:44.252 22:18:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:44.252 22:18:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:20:44.252 22:18:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:44.252 22:18:09 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:44.252 22:18:09 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:20:44.252 22:18:09 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.tz3RWmovQo 00:20:44.252 22:18:09 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:44.252 22:18:09 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.tz3RWmovQo 00:20:44.252 22:18:09 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:20:44.252 22:18:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:44.252 22:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:44.252 22:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:44.252 22:18:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2809231 00:20:44.252 22:18:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2809231 00:20:44.252 22:18:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:44.252 22:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2809231 ']' 00:20:44.252 22:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.252 22:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:44.252 22:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.252 22:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:44.252 22:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:44.513 [2024-07-15 22:18:09.594647] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
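target/tls.sh@159 above turns the 48-character key string 00112233445566778899aabbccddeeff0011223344556677 into the interchange form NVMeTLSkey-1:02:...: via nvmf/common.sh, which shells out to python for the encoding. The encoding itself is not printed in this trace; decoding the logged key_long value suggests it is base64 over the key string with a 4-byte checksum appended, so the sketch below reproduces it under the assumption that the checksum is a little-endian CRC-32 of the key string:

    # reproduces the key_long value above; the CRC-32 suffix is inferred from the logged
    # output, not shown by the trace itself
    key=00112233445566778899aabbccddeeff0011223344556677
    b64=$(python3 -c 'import base64,struct,sys,zlib; k=sys.argv[1].encode(); print(base64.b64encode(k+struct.pack("<I",zlib.crc32(k))).decode())' "$key")
    echo "NVMeTLSkey-1:02:${b64}:"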
00:20:44.513 [2024-07-15 22:18:09.594716] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:44.513 EAL: No free 2048 kB hugepages reported on node 1 00:20:44.513 [2024-07-15 22:18:09.679344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.513 [2024-07-15 22:18:09.739263] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:44.513 [2024-07-15 22:18:09.739297] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:44.513 [2024-07-15 22:18:09.739303] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:44.513 [2024-07-15 22:18:09.739307] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:44.513 [2024-07-15 22:18:09.739311] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:44.513 [2024-07-15 22:18:09.739326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:45.085 22:18:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:45.085 22:18:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:45.085 22:18:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:45.085 22:18:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:45.085 22:18:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.085 22:18:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:45.085 22:18:10 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.tz3RWmovQo 00:20:45.085 22:18:10 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.tz3RWmovQo 00:20:45.085 22:18:10 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:45.346 [2024-07-15 22:18:10.530628] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:45.346 22:18:10 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:45.607 22:18:10 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:45.607 [2024-07-15 22:18:10.827352] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:45.607 [2024-07-15 22:18:10.827522] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:45.607 22:18:10 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:45.868 malloc0 00:20:45.868 22:18:10 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:45.868 22:18:11 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.tz3RWmovQo 00:20:46.128 [2024-07-15 22:18:11.262427] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:46.128 22:18:11 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tz3RWmovQo 00:20:46.128 22:18:11 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:46.128 22:18:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:46.128 22:18:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:46.128 22:18:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.tz3RWmovQo' 00:20:46.128 22:18:11 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:46.128 22:18:11 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2809856 00:20:46.128 22:18:11 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:46.128 22:18:11 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2809856 /var/tmp/bdevperf.sock 00:20:46.128 22:18:11 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:46.128 22:18:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2809856 ']' 00:20:46.128 22:18:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:46.128 22:18:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:46.128 22:18:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:46.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:46.128 22:18:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:46.128 22:18:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:46.128 [2024-07-15 22:18:11.328163] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
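For readability, the target-side RPCs traced across the last few entries, which are what make this subsystem reachable over TLS with the new key, are gathered below; they are repeated verbatim from the log apart from shortening the rpc.py path:

    # target-side TLS setup as traced above (rpc.py path shortened)
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tz3RWmovQo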
00:20:46.128 [2024-07-15 22:18:11.328215] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2809856 ] 00:20:46.128 EAL: No free 2048 kB hugepages reported on node 1 00:20:46.128 [2024-07-15 22:18:11.377221] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.129 [2024-07-15 22:18:11.429333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:47.068 22:18:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:47.068 22:18:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:47.068 22:18:12 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tz3RWmovQo 00:20:47.068 [2024-07-15 22:18:12.242184] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:47.068 [2024-07-15 22:18:12.242240] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:47.068 TLSTESTn1 00:20:47.068 22:18:12 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:47.327 Running I/O for 10 seconds... 00:20:57.368 00:20:57.368 Latency(us) 00:20:57.368 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.368 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:57.368 Verification LBA range: start 0x0 length 0x2000 00:20:57.368 TLSTESTn1 : 10.08 2402.66 9.39 0.00 0.00 53083.57 5488.64 99177.81 00:20:57.368 =================================================================================================================== 00:20:57.368 Total : 2402.66 9.39 0.00 0.00 53083.57 5488.64 99177.81 00:20:57.368 0 00:20:57.368 22:18:22 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:57.368 22:18:22 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2809856 00:20:57.368 22:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2809856 ']' 00:20:57.368 22:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2809856 00:20:57.368 22:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:57.368 22:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:57.368 22:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2809856 00:20:57.368 22:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:57.368 22:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:57.368 22:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2809856' 00:20:57.368 killing process with pid 2809856 00:20:57.368 22:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2809856 00:20:57.368 Received shutdown signal, test time was about 10.000000 seconds 00:20:57.368 00:20:57.368 Latency(us) 00:20:57.368 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:20:57.368 =================================================================================================================== 00:20:57.368 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:57.368 [2024-07-15 22:18:22.597751] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:57.368 22:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2809856 00:20:57.630 22:18:22 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.tz3RWmovQo 00:20:57.630 22:18:22 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tz3RWmovQo 00:20:57.630 22:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:57.630 22:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tz3RWmovQo 00:20:57.630 22:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:57.630 22:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:57.630 22:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:57.630 22:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:57.630 22:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tz3RWmovQo 00:20:57.630 22:18:22 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:57.630 22:18:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:57.630 22:18:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:57.630 22:18:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.tz3RWmovQo' 00:20:57.630 22:18:22 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:57.630 22:18:22 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2811906 00:20:57.630 22:18:22 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:57.630 22:18:22 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2811906 /var/tmp/bdevperf.sock 00:20:57.630 22:18:22 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:57.630 22:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2811906 ']' 00:20:57.630 22:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:57.630 22:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:57.630 22:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:57.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:57.630 22:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:57.630 22:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:57.630 [2024-07-15 22:18:22.773363] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
00:20:57.630 [2024-07-15 22:18:22.773418] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2811906 ] 00:20:57.630 EAL: No free 2048 kB hugepages reported on node 1 00:20:57.630 [2024-07-15 22:18:22.822368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.630 [2024-07-15 22:18:22.874468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:58.570 22:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:58.570 22:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:58.570 22:18:23 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tz3RWmovQo 00:20:58.570 [2024-07-15 22:18:23.719450] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:58.570 [2024-07-15 22:18:23.719485] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:58.570 [2024-07-15 22:18:23.719490] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.tz3RWmovQo 00:20:58.570 request: 00:20:58.570 { 00:20:58.570 "name": "TLSTEST", 00:20:58.570 "trtype": "tcp", 00:20:58.570 "traddr": "10.0.0.2", 00:20:58.570 "adrfam": "ipv4", 00:20:58.570 "trsvcid": "4420", 00:20:58.570 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.570 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:58.570 "prchk_reftag": false, 00:20:58.570 "prchk_guard": false, 00:20:58.570 "hdgst": false, 00:20:58.570 "ddgst": false, 00:20:58.570 "psk": "/tmp/tmp.tz3RWmovQo", 00:20:58.570 "method": "bdev_nvme_attach_controller", 00:20:58.570 "req_id": 1 00:20:58.570 } 00:20:58.570 Got JSON-RPC error response 00:20:58.570 response: 00:20:58.570 { 00:20:58.570 "code": -1, 00:20:58.570 "message": "Operation not permitted" 00:20:58.570 } 00:20:58.570 22:18:23 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2811906 00:20:58.570 22:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2811906 ']' 00:20:58.570 22:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2811906 00:20:58.570 22:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:58.570 22:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:58.570 22:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2811906 00:20:58.570 22:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:58.570 22:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:58.570 22:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2811906' 00:20:58.570 killing process with pid 2811906 00:20:58.570 22:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2811906 00:20:58.570 Received shutdown signal, test time was about 10.000000 seconds 00:20:58.570 00:20:58.570 Latency(us) 00:20:58.570 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:58.570 
=================================================================================================================== 00:20:58.570 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:58.570 22:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2811906 00:20:58.830 22:18:23 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:58.830 22:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:58.830 22:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:58.830 22:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:58.830 22:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:58.830 22:18:23 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 2809231 00:20:58.830 22:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2809231 ']' 00:20:58.830 22:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2809231 00:20:58.830 22:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:58.830 22:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:58.830 22:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2809231 00:20:58.830 22:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:58.830 22:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:58.830 22:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2809231' 00:20:58.830 killing process with pid 2809231 00:20:58.830 22:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2809231 00:20:58.830 [2024-07-15 22:18:23.966531] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:58.830 22:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2809231 00:20:58.830 22:18:24 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:20:58.830 22:18:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:58.830 22:18:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:58.830 22:18:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:58.830 22:18:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2812244 00:20:58.830 22:18:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2812244 00:20:58.830 22:18:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:58.830 22:18:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2812244 ']' 00:20:58.830 22:18:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:58.830 22:18:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:58.830 22:18:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:58.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
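The key file /tmp/tmp.tz3RWmovQo is still mode 0666 from target/tls.sh@170, which already made the host-side attach above fail with "Incorrect permissions for PSK file"; the nvmf_subsystem_add_host call that follows is expected to reject the same file on the target side before mode 0600 is restored at target/tls.sh@181. A quick way to confirm the state being exercised (stat options as in GNU coreutils):

    # the key file is deliberately left world-readable for this negative test
    stat -c '%a %n' /tmp/tmp.tz3RWmovQo    # prints: 666 /tmp/tmp.tz3RWmovQo while this test runs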
00:20:58.830 22:18:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:58.830 22:18:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:58.830 [2024-07-15 22:18:24.140551] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:20:58.830 [2024-07-15 22:18:24.140603] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:59.090 EAL: No free 2048 kB hugepages reported on node 1 00:20:59.090 [2024-07-15 22:18:24.220707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.090 [2024-07-15 22:18:24.274484] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:59.090 [2024-07-15 22:18:24.274514] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:59.090 [2024-07-15 22:18:24.274519] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:59.090 [2024-07-15 22:18:24.274523] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:59.090 [2024-07-15 22:18:24.274527] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:59.090 [2024-07-15 22:18:24.274549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:59.660 22:18:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:59.660 22:18:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:59.660 22:18:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:59.660 22:18:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:59.660 22:18:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.660 22:18:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:59.660 22:18:24 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.tz3RWmovQo 00:20:59.660 22:18:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:59.660 22:18:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.tz3RWmovQo 00:20:59.660 22:18:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:20:59.660 22:18:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:59.660 22:18:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:20:59.660 22:18:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:59.660 22:18:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.tz3RWmovQo 00:20:59.660 22:18:24 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.tz3RWmovQo 00:20:59.660 22:18:24 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:59.920 [2024-07-15 22:18:25.100106] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:59.920 22:18:25 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:00.180 
22:18:25 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:00.180 [2024-07-15 22:18:25.392822] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:00.180 [2024-07-15 22:18:25.392996] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:00.180 22:18:25 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:00.440 malloc0 00:21:00.440 22:18:25 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:00.440 22:18:25 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tz3RWmovQo 00:21:00.700 [2024-07-15 22:18:25.859917] tcp.c:3603:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:00.700 [2024-07-15 22:18:25.859936] tcp.c:3689:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:21:00.700 [2024-07-15 22:18:25.859956] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:00.700 request: 00:21:00.700 { 00:21:00.700 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.700 "host": "nqn.2016-06.io.spdk:host1", 00:21:00.700 "psk": "/tmp/tmp.tz3RWmovQo", 00:21:00.700 "method": "nvmf_subsystem_add_host", 00:21:00.700 "req_id": 1 00:21:00.700 } 00:21:00.700 Got JSON-RPC error response 00:21:00.700 response: 00:21:00.700 { 00:21:00.700 "code": -32603, 00:21:00.700 "message": "Internal error" 00:21:00.700 } 00:21:00.700 22:18:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:00.700 22:18:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:00.700 22:18:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:00.700 22:18:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:00.700 22:18:25 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 2812244 00:21:00.700 22:18:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2812244 ']' 00:21:00.700 22:18:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2812244 00:21:00.700 22:18:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:00.700 22:18:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:00.700 22:18:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2812244 00:21:00.700 22:18:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:00.700 22:18:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:00.700 22:18:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2812244' 00:21:00.700 killing process with pid 2812244 00:21:00.700 22:18:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2812244 00:21:00.700 22:18:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2812244 00:21:00.961 22:18:26 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.tz3RWmovQo 00:21:00.961 22:18:26 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:21:00.961 
22:18:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:00.961 22:18:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:00.961 22:18:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:00.961 22:18:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2812617 00:21:00.961 22:18:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2812617 00:21:00.961 22:18:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:00.961 22:18:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2812617 ']' 00:21:00.961 22:18:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.961 22:18:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:00.961 22:18:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:00.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:00.961 22:18:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:00.961 22:18:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:00.961 [2024-07-15 22:18:26.116210] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:21:00.961 [2024-07-15 22:18:26.116269] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:00.961 EAL: No free 2048 kB hugepages reported on node 1 00:21:00.961 [2024-07-15 22:18:26.199114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.961 [2024-07-15 22:18:26.253982] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:00.961 [2024-07-15 22:18:26.254012] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:00.961 [2024-07-15 22:18:26.254017] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:00.961 [2024-07-15 22:18:26.254022] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:00.961 [2024-07-15 22:18:26.254026] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
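Note: the error above (tcp_load_psk: Incorrect permissions for PSK file) is the point of the previous step: nvmf_subsystem_add_host refuses a PSK interchange file whose mode is looser than owner-only, and the script proceeds only after chmod 0600 on it (target/tls.sh@181). A condensed sketch of the target-side sequence that is then repeated for the positive case, assuming rpc.py abbreviates scripts/rpc.py in the SPDK tree and $PSK_FILE the temporary PSK file:

  chmod 0600 "$PSK_FILE"                                   # required, or tcp_load_psk rejects the file
  rpc.py nvmf_create_transport -t tcp -o                   # TCP transport
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$PSK_FILE"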
00:21:00.961 [2024-07-15 22:18:26.254041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:01.902 22:18:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:01.902 22:18:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:01.902 22:18:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:01.902 22:18:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:01.902 22:18:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:01.902 22:18:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:01.902 22:18:26 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.tz3RWmovQo 00:21:01.902 22:18:26 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.tz3RWmovQo 00:21:01.902 22:18:26 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:01.902 [2024-07-15 22:18:27.043819] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:01.902 22:18:27 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:01.902 22:18:27 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:02.162 [2024-07-15 22:18:27.352572] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:02.162 [2024-07-15 22:18:27.352751] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:02.162 22:18:27 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:02.422 malloc0 00:21:02.422 22:18:27 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:02.422 22:18:27 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tz3RWmovQo 00:21:02.682 [2024-07-15 22:18:27.795573] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:02.682 22:18:27 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2812974 00:21:02.682 22:18:27 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:02.682 22:18:27 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:02.682 22:18:27 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2812974 /var/tmp/bdevperf.sock 00:21:02.682 22:18:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2812974 ']' 00:21:02.682 22:18:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:02.682 22:18:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:02.682 22:18:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:02.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:02.682 22:18:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:02.682 22:18:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.682 [2024-07-15 22:18:27.856249] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:21:02.682 [2024-07-15 22:18:27.856296] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2812974 ] 00:21:02.682 EAL: No free 2048 kB hugepages reported on node 1 00:21:02.682 [2024-07-15 22:18:27.905351] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.682 [2024-07-15 22:18:27.957950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:03.621 22:18:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:03.621 22:18:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:03.621 22:18:28 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tz3RWmovQo 00:21:03.621 [2024-07-15 22:18:28.742786] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:03.621 [2024-07-15 22:18:28.742840] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:03.621 TLSTESTn1 00:21:03.621 22:18:28 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:03.882 22:18:29 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:21:03.882 "subsystems": [ 00:21:03.882 { 00:21:03.882 "subsystem": "keyring", 00:21:03.882 "config": [] 00:21:03.882 }, 00:21:03.882 { 00:21:03.882 "subsystem": "iobuf", 00:21:03.882 "config": [ 00:21:03.882 { 00:21:03.882 "method": "iobuf_set_options", 00:21:03.882 "params": { 00:21:03.882 "small_pool_count": 8192, 00:21:03.882 "large_pool_count": 1024, 00:21:03.882 "small_bufsize": 8192, 00:21:03.882 "large_bufsize": 135168 00:21:03.882 } 00:21:03.882 } 00:21:03.882 ] 00:21:03.882 }, 00:21:03.882 { 00:21:03.882 "subsystem": "sock", 00:21:03.882 "config": [ 00:21:03.882 { 00:21:03.882 "method": "sock_set_default_impl", 00:21:03.882 "params": { 00:21:03.882 "impl_name": "posix" 00:21:03.882 } 00:21:03.882 }, 00:21:03.882 { 00:21:03.882 "method": "sock_impl_set_options", 00:21:03.882 "params": { 00:21:03.882 "impl_name": "ssl", 00:21:03.882 "recv_buf_size": 4096, 00:21:03.882 "send_buf_size": 4096, 00:21:03.882 "enable_recv_pipe": true, 00:21:03.882 "enable_quickack": false, 00:21:03.882 "enable_placement_id": 0, 00:21:03.882 "enable_zerocopy_send_server": true, 00:21:03.882 "enable_zerocopy_send_client": false, 00:21:03.882 "zerocopy_threshold": 0, 00:21:03.882 "tls_version": 0, 00:21:03.882 "enable_ktls": false 00:21:03.882 } 00:21:03.882 }, 00:21:03.882 { 00:21:03.882 "method": "sock_impl_set_options", 00:21:03.882 "params": { 00:21:03.882 "impl_name": "posix", 00:21:03.882 "recv_buf_size": 2097152, 00:21:03.882 
"send_buf_size": 2097152, 00:21:03.882 "enable_recv_pipe": true, 00:21:03.882 "enable_quickack": false, 00:21:03.882 "enable_placement_id": 0, 00:21:03.882 "enable_zerocopy_send_server": true, 00:21:03.882 "enable_zerocopy_send_client": false, 00:21:03.882 "zerocopy_threshold": 0, 00:21:03.882 "tls_version": 0, 00:21:03.882 "enable_ktls": false 00:21:03.882 } 00:21:03.882 } 00:21:03.882 ] 00:21:03.882 }, 00:21:03.882 { 00:21:03.882 "subsystem": "vmd", 00:21:03.882 "config": [] 00:21:03.882 }, 00:21:03.882 { 00:21:03.882 "subsystem": "accel", 00:21:03.882 "config": [ 00:21:03.882 { 00:21:03.882 "method": "accel_set_options", 00:21:03.882 "params": { 00:21:03.882 "small_cache_size": 128, 00:21:03.882 "large_cache_size": 16, 00:21:03.882 "task_count": 2048, 00:21:03.882 "sequence_count": 2048, 00:21:03.882 "buf_count": 2048 00:21:03.882 } 00:21:03.882 } 00:21:03.882 ] 00:21:03.882 }, 00:21:03.882 { 00:21:03.882 "subsystem": "bdev", 00:21:03.882 "config": [ 00:21:03.882 { 00:21:03.882 "method": "bdev_set_options", 00:21:03.882 "params": { 00:21:03.882 "bdev_io_pool_size": 65535, 00:21:03.882 "bdev_io_cache_size": 256, 00:21:03.882 "bdev_auto_examine": true, 00:21:03.882 "iobuf_small_cache_size": 128, 00:21:03.882 "iobuf_large_cache_size": 16 00:21:03.882 } 00:21:03.882 }, 00:21:03.882 { 00:21:03.882 "method": "bdev_raid_set_options", 00:21:03.882 "params": { 00:21:03.882 "process_window_size_kb": 1024 00:21:03.882 } 00:21:03.882 }, 00:21:03.882 { 00:21:03.882 "method": "bdev_iscsi_set_options", 00:21:03.882 "params": { 00:21:03.882 "timeout_sec": 30 00:21:03.882 } 00:21:03.882 }, 00:21:03.882 { 00:21:03.882 "method": "bdev_nvme_set_options", 00:21:03.882 "params": { 00:21:03.882 "action_on_timeout": "none", 00:21:03.882 "timeout_us": 0, 00:21:03.882 "timeout_admin_us": 0, 00:21:03.882 "keep_alive_timeout_ms": 10000, 00:21:03.882 "arbitration_burst": 0, 00:21:03.882 "low_priority_weight": 0, 00:21:03.882 "medium_priority_weight": 0, 00:21:03.882 "high_priority_weight": 0, 00:21:03.882 "nvme_adminq_poll_period_us": 10000, 00:21:03.882 "nvme_ioq_poll_period_us": 0, 00:21:03.882 "io_queue_requests": 0, 00:21:03.882 "delay_cmd_submit": true, 00:21:03.882 "transport_retry_count": 4, 00:21:03.882 "bdev_retry_count": 3, 00:21:03.882 "transport_ack_timeout": 0, 00:21:03.882 "ctrlr_loss_timeout_sec": 0, 00:21:03.882 "reconnect_delay_sec": 0, 00:21:03.882 "fast_io_fail_timeout_sec": 0, 00:21:03.882 "disable_auto_failback": false, 00:21:03.882 "generate_uuids": false, 00:21:03.882 "transport_tos": 0, 00:21:03.882 "nvme_error_stat": false, 00:21:03.882 "rdma_srq_size": 0, 00:21:03.882 "io_path_stat": false, 00:21:03.882 "allow_accel_sequence": false, 00:21:03.882 "rdma_max_cq_size": 0, 00:21:03.882 "rdma_cm_event_timeout_ms": 0, 00:21:03.882 "dhchap_digests": [ 00:21:03.882 "sha256", 00:21:03.882 "sha384", 00:21:03.882 "sha512" 00:21:03.882 ], 00:21:03.882 "dhchap_dhgroups": [ 00:21:03.882 "null", 00:21:03.882 "ffdhe2048", 00:21:03.882 "ffdhe3072", 00:21:03.882 "ffdhe4096", 00:21:03.882 "ffdhe6144", 00:21:03.882 "ffdhe8192" 00:21:03.882 ] 00:21:03.882 } 00:21:03.882 }, 00:21:03.882 { 00:21:03.882 "method": "bdev_nvme_set_hotplug", 00:21:03.882 "params": { 00:21:03.882 "period_us": 100000, 00:21:03.882 "enable": false 00:21:03.882 } 00:21:03.882 }, 00:21:03.882 { 00:21:03.882 "method": "bdev_malloc_create", 00:21:03.882 "params": { 00:21:03.882 "name": "malloc0", 00:21:03.882 "num_blocks": 8192, 00:21:03.882 "block_size": 4096, 00:21:03.882 "physical_block_size": 4096, 00:21:03.882 "uuid": 
"1e3c6b26-1945-426e-a1e3-6900b633ff20", 00:21:03.882 "optimal_io_boundary": 0 00:21:03.882 } 00:21:03.882 }, 00:21:03.882 { 00:21:03.882 "method": "bdev_wait_for_examine" 00:21:03.882 } 00:21:03.882 ] 00:21:03.882 }, 00:21:03.882 { 00:21:03.882 "subsystem": "nbd", 00:21:03.882 "config": [] 00:21:03.882 }, 00:21:03.882 { 00:21:03.882 "subsystem": "scheduler", 00:21:03.882 "config": [ 00:21:03.882 { 00:21:03.882 "method": "framework_set_scheduler", 00:21:03.882 "params": { 00:21:03.882 "name": "static" 00:21:03.882 } 00:21:03.882 } 00:21:03.882 ] 00:21:03.882 }, 00:21:03.882 { 00:21:03.882 "subsystem": "nvmf", 00:21:03.882 "config": [ 00:21:03.882 { 00:21:03.882 "method": "nvmf_set_config", 00:21:03.882 "params": { 00:21:03.882 "discovery_filter": "match_any", 00:21:03.882 "admin_cmd_passthru": { 00:21:03.882 "identify_ctrlr": false 00:21:03.882 } 00:21:03.882 } 00:21:03.882 }, 00:21:03.882 { 00:21:03.882 "method": "nvmf_set_max_subsystems", 00:21:03.882 "params": { 00:21:03.882 "max_subsystems": 1024 00:21:03.882 } 00:21:03.882 }, 00:21:03.882 { 00:21:03.882 "method": "nvmf_set_crdt", 00:21:03.882 "params": { 00:21:03.882 "crdt1": 0, 00:21:03.882 "crdt2": 0, 00:21:03.882 "crdt3": 0 00:21:03.883 } 00:21:03.883 }, 00:21:03.883 { 00:21:03.883 "method": "nvmf_create_transport", 00:21:03.883 "params": { 00:21:03.883 "trtype": "TCP", 00:21:03.883 "max_queue_depth": 128, 00:21:03.883 "max_io_qpairs_per_ctrlr": 127, 00:21:03.883 "in_capsule_data_size": 4096, 00:21:03.883 "max_io_size": 131072, 00:21:03.883 "io_unit_size": 131072, 00:21:03.883 "max_aq_depth": 128, 00:21:03.883 "num_shared_buffers": 511, 00:21:03.883 "buf_cache_size": 4294967295, 00:21:03.883 "dif_insert_or_strip": false, 00:21:03.883 "zcopy": false, 00:21:03.883 "c2h_success": false, 00:21:03.883 "sock_priority": 0, 00:21:03.883 "abort_timeout_sec": 1, 00:21:03.883 "ack_timeout": 0, 00:21:03.883 "data_wr_pool_size": 0 00:21:03.883 } 00:21:03.883 }, 00:21:03.883 { 00:21:03.883 "method": "nvmf_create_subsystem", 00:21:03.883 "params": { 00:21:03.883 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.883 "allow_any_host": false, 00:21:03.883 "serial_number": "SPDK00000000000001", 00:21:03.883 "model_number": "SPDK bdev Controller", 00:21:03.883 "max_namespaces": 10, 00:21:03.883 "min_cntlid": 1, 00:21:03.883 "max_cntlid": 65519, 00:21:03.883 "ana_reporting": false 00:21:03.883 } 00:21:03.883 }, 00:21:03.883 { 00:21:03.883 "method": "nvmf_subsystem_add_host", 00:21:03.883 "params": { 00:21:03.883 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.883 "host": "nqn.2016-06.io.spdk:host1", 00:21:03.883 "psk": "/tmp/tmp.tz3RWmovQo" 00:21:03.883 } 00:21:03.883 }, 00:21:03.883 { 00:21:03.883 "method": "nvmf_subsystem_add_ns", 00:21:03.883 "params": { 00:21:03.883 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.883 "namespace": { 00:21:03.883 "nsid": 1, 00:21:03.883 "bdev_name": "malloc0", 00:21:03.883 "nguid": "1E3C6B261945426EA1E36900B633FF20", 00:21:03.883 "uuid": "1e3c6b26-1945-426e-a1e3-6900b633ff20", 00:21:03.883 "no_auto_visible": false 00:21:03.883 } 00:21:03.883 } 00:21:03.883 }, 00:21:03.883 { 00:21:03.883 "method": "nvmf_subsystem_add_listener", 00:21:03.883 "params": { 00:21:03.883 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.883 "listen_address": { 00:21:03.883 "trtype": "TCP", 00:21:03.883 "adrfam": "IPv4", 00:21:03.883 "traddr": "10.0.0.2", 00:21:03.883 "trsvcid": "4420" 00:21:03.883 }, 00:21:03.883 "secure_channel": true 00:21:03.883 } 00:21:03.883 } 00:21:03.883 ] 00:21:03.883 } 00:21:03.883 ] 00:21:03.883 }' 00:21:03.883 22:18:29 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:04.199 22:18:29 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:21:04.199 "subsystems": [ 00:21:04.199 { 00:21:04.199 "subsystem": "keyring", 00:21:04.199 "config": [] 00:21:04.199 }, 00:21:04.199 { 00:21:04.199 "subsystem": "iobuf", 00:21:04.199 "config": [ 00:21:04.199 { 00:21:04.199 "method": "iobuf_set_options", 00:21:04.199 "params": { 00:21:04.199 "small_pool_count": 8192, 00:21:04.199 "large_pool_count": 1024, 00:21:04.199 "small_bufsize": 8192, 00:21:04.199 "large_bufsize": 135168 00:21:04.199 } 00:21:04.199 } 00:21:04.199 ] 00:21:04.199 }, 00:21:04.199 { 00:21:04.199 "subsystem": "sock", 00:21:04.199 "config": [ 00:21:04.199 { 00:21:04.199 "method": "sock_set_default_impl", 00:21:04.199 "params": { 00:21:04.199 "impl_name": "posix" 00:21:04.199 } 00:21:04.199 }, 00:21:04.199 { 00:21:04.199 "method": "sock_impl_set_options", 00:21:04.199 "params": { 00:21:04.199 "impl_name": "ssl", 00:21:04.199 "recv_buf_size": 4096, 00:21:04.199 "send_buf_size": 4096, 00:21:04.199 "enable_recv_pipe": true, 00:21:04.199 "enable_quickack": false, 00:21:04.199 "enable_placement_id": 0, 00:21:04.199 "enable_zerocopy_send_server": true, 00:21:04.199 "enable_zerocopy_send_client": false, 00:21:04.199 "zerocopy_threshold": 0, 00:21:04.199 "tls_version": 0, 00:21:04.199 "enable_ktls": false 00:21:04.199 } 00:21:04.199 }, 00:21:04.199 { 00:21:04.199 "method": "sock_impl_set_options", 00:21:04.199 "params": { 00:21:04.199 "impl_name": "posix", 00:21:04.199 "recv_buf_size": 2097152, 00:21:04.199 "send_buf_size": 2097152, 00:21:04.199 "enable_recv_pipe": true, 00:21:04.199 "enable_quickack": false, 00:21:04.199 "enable_placement_id": 0, 00:21:04.199 "enable_zerocopy_send_server": true, 00:21:04.199 "enable_zerocopy_send_client": false, 00:21:04.199 "zerocopy_threshold": 0, 00:21:04.199 "tls_version": 0, 00:21:04.199 "enable_ktls": false 00:21:04.199 } 00:21:04.199 } 00:21:04.199 ] 00:21:04.199 }, 00:21:04.199 { 00:21:04.199 "subsystem": "vmd", 00:21:04.199 "config": [] 00:21:04.199 }, 00:21:04.199 { 00:21:04.199 "subsystem": "accel", 00:21:04.199 "config": [ 00:21:04.199 { 00:21:04.199 "method": "accel_set_options", 00:21:04.199 "params": { 00:21:04.199 "small_cache_size": 128, 00:21:04.199 "large_cache_size": 16, 00:21:04.199 "task_count": 2048, 00:21:04.199 "sequence_count": 2048, 00:21:04.199 "buf_count": 2048 00:21:04.199 } 00:21:04.199 } 00:21:04.199 ] 00:21:04.199 }, 00:21:04.199 { 00:21:04.199 "subsystem": "bdev", 00:21:04.199 "config": [ 00:21:04.199 { 00:21:04.199 "method": "bdev_set_options", 00:21:04.199 "params": { 00:21:04.199 "bdev_io_pool_size": 65535, 00:21:04.199 "bdev_io_cache_size": 256, 00:21:04.199 "bdev_auto_examine": true, 00:21:04.199 "iobuf_small_cache_size": 128, 00:21:04.199 "iobuf_large_cache_size": 16 00:21:04.199 } 00:21:04.199 }, 00:21:04.199 { 00:21:04.199 "method": "bdev_raid_set_options", 00:21:04.199 "params": { 00:21:04.199 "process_window_size_kb": 1024 00:21:04.199 } 00:21:04.199 }, 00:21:04.199 { 00:21:04.199 "method": "bdev_iscsi_set_options", 00:21:04.199 "params": { 00:21:04.199 "timeout_sec": 30 00:21:04.199 } 00:21:04.199 }, 00:21:04.199 { 00:21:04.199 "method": "bdev_nvme_set_options", 00:21:04.199 "params": { 00:21:04.199 "action_on_timeout": "none", 00:21:04.199 "timeout_us": 0, 00:21:04.199 "timeout_admin_us": 0, 00:21:04.199 "keep_alive_timeout_ms": 10000, 00:21:04.199 "arbitration_burst": 0, 
00:21:04.199 "low_priority_weight": 0, 00:21:04.199 "medium_priority_weight": 0, 00:21:04.199 "high_priority_weight": 0, 00:21:04.199 "nvme_adminq_poll_period_us": 10000, 00:21:04.199 "nvme_ioq_poll_period_us": 0, 00:21:04.199 "io_queue_requests": 512, 00:21:04.199 "delay_cmd_submit": true, 00:21:04.199 "transport_retry_count": 4, 00:21:04.199 "bdev_retry_count": 3, 00:21:04.199 "transport_ack_timeout": 0, 00:21:04.199 "ctrlr_loss_timeout_sec": 0, 00:21:04.199 "reconnect_delay_sec": 0, 00:21:04.199 "fast_io_fail_timeout_sec": 0, 00:21:04.199 "disable_auto_failback": false, 00:21:04.199 "generate_uuids": false, 00:21:04.199 "transport_tos": 0, 00:21:04.199 "nvme_error_stat": false, 00:21:04.199 "rdma_srq_size": 0, 00:21:04.199 "io_path_stat": false, 00:21:04.199 "allow_accel_sequence": false, 00:21:04.199 "rdma_max_cq_size": 0, 00:21:04.199 "rdma_cm_event_timeout_ms": 0, 00:21:04.199 "dhchap_digests": [ 00:21:04.199 "sha256", 00:21:04.199 "sha384", 00:21:04.199 "sha512" 00:21:04.199 ], 00:21:04.199 "dhchap_dhgroups": [ 00:21:04.199 "null", 00:21:04.199 "ffdhe2048", 00:21:04.199 "ffdhe3072", 00:21:04.199 "ffdhe4096", 00:21:04.199 "ffdhe6144", 00:21:04.199 "ffdhe8192" 00:21:04.199 ] 00:21:04.199 } 00:21:04.199 }, 00:21:04.199 { 00:21:04.199 "method": "bdev_nvme_attach_controller", 00:21:04.199 "params": { 00:21:04.199 "name": "TLSTEST", 00:21:04.199 "trtype": "TCP", 00:21:04.199 "adrfam": "IPv4", 00:21:04.199 "traddr": "10.0.0.2", 00:21:04.199 "trsvcid": "4420", 00:21:04.199 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.199 "prchk_reftag": false, 00:21:04.199 "prchk_guard": false, 00:21:04.199 "ctrlr_loss_timeout_sec": 0, 00:21:04.199 "reconnect_delay_sec": 0, 00:21:04.199 "fast_io_fail_timeout_sec": 0, 00:21:04.199 "psk": "/tmp/tmp.tz3RWmovQo", 00:21:04.199 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:04.199 "hdgst": false, 00:21:04.199 "ddgst": false 00:21:04.199 } 00:21:04.199 }, 00:21:04.199 { 00:21:04.199 "method": "bdev_nvme_set_hotplug", 00:21:04.199 "params": { 00:21:04.199 "period_us": 100000, 00:21:04.200 "enable": false 00:21:04.200 } 00:21:04.200 }, 00:21:04.200 { 00:21:04.200 "method": "bdev_wait_for_examine" 00:21:04.200 } 00:21:04.200 ] 00:21:04.200 }, 00:21:04.200 { 00:21:04.200 "subsystem": "nbd", 00:21:04.200 "config": [] 00:21:04.200 } 00:21:04.200 ] 00:21:04.200 }' 00:21:04.200 22:18:29 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 2812974 00:21:04.200 22:18:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2812974 ']' 00:21:04.200 22:18:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2812974 00:21:04.200 22:18:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:04.200 22:18:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:04.200 22:18:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2812974 00:21:04.200 22:18:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:04.200 22:18:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:04.200 22:18:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2812974' 00:21:04.200 killing process with pid 2812974 00:21:04.200 22:18:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2812974 00:21:04.200 Received shutdown signal, test time was about 10.000000 seconds 00:21:04.200 00:21:04.200 Latency(us) 00:21:04.200 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:21:04.200 =================================================================================================================== 00:21:04.200 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:04.200 [2024-07-15 22:18:29.375298] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:04.200 22:18:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2812974 00:21:04.200 22:18:29 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 2812617 00:21:04.200 22:18:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2812617 ']' 00:21:04.200 22:18:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2812617 00:21:04.200 22:18:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:04.200 22:18:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:04.200 22:18:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2812617 00:21:04.461 22:18:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:04.461 22:18:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:04.461 22:18:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2812617' 00:21:04.461 killing process with pid 2812617 00:21:04.461 22:18:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2812617 00:21:04.461 [2024-07-15 22:18:29.542840] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:04.461 22:18:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2812617 00:21:04.461 22:18:29 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:04.461 22:18:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:04.461 22:18:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:04.461 22:18:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:04.461 22:18:29 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:21:04.461 "subsystems": [ 00:21:04.461 { 00:21:04.461 "subsystem": "keyring", 00:21:04.461 "config": [] 00:21:04.461 }, 00:21:04.461 { 00:21:04.461 "subsystem": "iobuf", 00:21:04.461 "config": [ 00:21:04.461 { 00:21:04.461 "method": "iobuf_set_options", 00:21:04.461 "params": { 00:21:04.461 "small_pool_count": 8192, 00:21:04.461 "large_pool_count": 1024, 00:21:04.461 "small_bufsize": 8192, 00:21:04.461 "large_bufsize": 135168 00:21:04.461 } 00:21:04.461 } 00:21:04.461 ] 00:21:04.461 }, 00:21:04.461 { 00:21:04.461 "subsystem": "sock", 00:21:04.461 "config": [ 00:21:04.461 { 00:21:04.461 "method": "sock_set_default_impl", 00:21:04.461 "params": { 00:21:04.461 "impl_name": "posix" 00:21:04.461 } 00:21:04.461 }, 00:21:04.461 { 00:21:04.461 "method": "sock_impl_set_options", 00:21:04.461 "params": { 00:21:04.461 "impl_name": "ssl", 00:21:04.461 "recv_buf_size": 4096, 00:21:04.461 "send_buf_size": 4096, 00:21:04.461 "enable_recv_pipe": true, 00:21:04.461 "enable_quickack": false, 00:21:04.461 "enable_placement_id": 0, 00:21:04.461 "enable_zerocopy_send_server": true, 00:21:04.461 "enable_zerocopy_send_client": false, 00:21:04.461 "zerocopy_threshold": 0, 00:21:04.461 "tls_version": 0, 00:21:04.461 "enable_ktls": false 00:21:04.461 } 00:21:04.461 }, 00:21:04.461 { 00:21:04.461 "method": "sock_impl_set_options", 
00:21:04.461 "params": { 00:21:04.461 "impl_name": "posix", 00:21:04.461 "recv_buf_size": 2097152, 00:21:04.461 "send_buf_size": 2097152, 00:21:04.461 "enable_recv_pipe": true, 00:21:04.461 "enable_quickack": false, 00:21:04.461 "enable_placement_id": 0, 00:21:04.461 "enable_zerocopy_send_server": true, 00:21:04.461 "enable_zerocopy_send_client": false, 00:21:04.461 "zerocopy_threshold": 0, 00:21:04.461 "tls_version": 0, 00:21:04.461 "enable_ktls": false 00:21:04.461 } 00:21:04.461 } 00:21:04.461 ] 00:21:04.461 }, 00:21:04.461 { 00:21:04.461 "subsystem": "vmd", 00:21:04.461 "config": [] 00:21:04.461 }, 00:21:04.461 { 00:21:04.461 "subsystem": "accel", 00:21:04.461 "config": [ 00:21:04.461 { 00:21:04.461 "method": "accel_set_options", 00:21:04.461 "params": { 00:21:04.461 "small_cache_size": 128, 00:21:04.461 "large_cache_size": 16, 00:21:04.461 "task_count": 2048, 00:21:04.461 "sequence_count": 2048, 00:21:04.461 "buf_count": 2048 00:21:04.461 } 00:21:04.461 } 00:21:04.461 ] 00:21:04.461 }, 00:21:04.461 { 00:21:04.461 "subsystem": "bdev", 00:21:04.461 "config": [ 00:21:04.461 { 00:21:04.461 "method": "bdev_set_options", 00:21:04.461 "params": { 00:21:04.461 "bdev_io_pool_size": 65535, 00:21:04.461 "bdev_io_cache_size": 256, 00:21:04.461 "bdev_auto_examine": true, 00:21:04.461 "iobuf_small_cache_size": 128, 00:21:04.461 "iobuf_large_cache_size": 16 00:21:04.461 } 00:21:04.461 }, 00:21:04.461 { 00:21:04.461 "method": "bdev_raid_set_options", 00:21:04.461 "params": { 00:21:04.461 "process_window_size_kb": 1024 00:21:04.461 } 00:21:04.461 }, 00:21:04.461 { 00:21:04.461 "method": "bdev_iscsi_set_options", 00:21:04.461 "params": { 00:21:04.461 "timeout_sec": 30 00:21:04.461 } 00:21:04.461 }, 00:21:04.461 { 00:21:04.461 "method": "bdev_nvme_set_options", 00:21:04.461 "params": { 00:21:04.461 "action_on_timeout": "none", 00:21:04.461 "timeout_us": 0, 00:21:04.461 "timeout_admin_us": 0, 00:21:04.461 "keep_alive_timeout_ms": 10000, 00:21:04.461 "arbitration_burst": 0, 00:21:04.461 "low_priority_weight": 0, 00:21:04.461 "medium_priority_weight": 0, 00:21:04.461 "high_priority_weight": 0, 00:21:04.461 "nvme_adminq_poll_period_us": 10000, 00:21:04.461 "nvme_ioq_poll_period_us": 0, 00:21:04.461 "io_queue_requests": 0, 00:21:04.461 "delay_cmd_submit": true, 00:21:04.461 "transport_retry_count": 4, 00:21:04.461 "bdev_retry_count": 3, 00:21:04.461 "transport_ack_timeout": 0, 00:21:04.461 "ctrlr_loss_timeout_sec": 0, 00:21:04.461 "reconnect_delay_sec": 0, 00:21:04.461 "fast_io_fail_timeout_sec": 0, 00:21:04.461 "disable_auto_failback": false, 00:21:04.461 "generate_uuids": false, 00:21:04.461 "transport_tos": 0, 00:21:04.461 "nvme_error_stat": false, 00:21:04.461 "rdma_srq_size": 0, 00:21:04.461 "io_path_stat": false, 00:21:04.461 "allow_accel_sequence": false, 00:21:04.461 "rdma_max_cq_size": 0, 00:21:04.461 "rdma_cm_event_timeout_ms": 0, 00:21:04.461 "dhchap_digests": [ 00:21:04.461 "sha256", 00:21:04.461 "sha384", 00:21:04.461 "sha512" 00:21:04.461 ], 00:21:04.462 "dhchap_dhgroups": [ 00:21:04.462 "null", 00:21:04.462 "ffdhe2048", 00:21:04.462 "ffdhe3072", 00:21:04.462 "ffdhe4096", 00:21:04.462 "ffdhe6144", 00:21:04.462 "ffdhe8192" 00:21:04.462 ] 00:21:04.462 } 00:21:04.462 }, 00:21:04.462 { 00:21:04.462 "method": "bdev_nvme_set_hotplug", 00:21:04.462 "params": { 00:21:04.462 "period_us": 100000, 00:21:04.462 "enable": false 00:21:04.462 } 00:21:04.462 }, 00:21:04.462 { 00:21:04.462 "method": "bdev_malloc_create", 00:21:04.462 "params": { 00:21:04.462 "name": "malloc0", 00:21:04.462 "num_blocks": 8192, 
00:21:04.462 "block_size": 4096, 00:21:04.462 "physical_block_size": 4096, 00:21:04.462 "uuid": "1e3c6b26-1945-426e-a1e3-6900b633ff20", 00:21:04.462 "optimal_io_boundary": 0 00:21:04.462 } 00:21:04.462 }, 00:21:04.462 { 00:21:04.462 "method": "bdev_wait_for_examine" 00:21:04.462 } 00:21:04.462 ] 00:21:04.462 }, 00:21:04.462 { 00:21:04.462 "subsystem": "nbd", 00:21:04.462 "config": [] 00:21:04.462 }, 00:21:04.462 { 00:21:04.462 "subsystem": "scheduler", 00:21:04.462 "config": [ 00:21:04.462 { 00:21:04.462 "method": "framework_set_scheduler", 00:21:04.462 "params": { 00:21:04.462 "name": "static" 00:21:04.462 } 00:21:04.462 } 00:21:04.462 ] 00:21:04.462 }, 00:21:04.462 { 00:21:04.462 "subsystem": "nvmf", 00:21:04.462 "config": [ 00:21:04.462 { 00:21:04.462 "method": "nvmf_set_config", 00:21:04.462 "params": { 00:21:04.462 "discovery_filter": "match_any", 00:21:04.462 "admin_cmd_passthru": { 00:21:04.462 "identify_ctrlr": false 00:21:04.462 } 00:21:04.462 } 00:21:04.462 }, 00:21:04.462 { 00:21:04.462 "method": "nvmf_set_max_subsystems", 00:21:04.462 "params": { 00:21:04.462 "max_subsystems": 1024 00:21:04.462 } 00:21:04.462 }, 00:21:04.462 { 00:21:04.462 "method": "nvmf_set_crdt", 00:21:04.462 "params": { 00:21:04.462 "crdt1": 0, 00:21:04.462 "crdt2": 0, 00:21:04.462 "crdt3": 0 00:21:04.462 } 00:21:04.462 }, 00:21:04.462 { 00:21:04.462 "method": "nvmf_create_transport", 00:21:04.462 "params": { 00:21:04.462 "trtype": "TCP", 00:21:04.462 "max_queue_depth": 128, 00:21:04.462 "max_io_qpairs_per_ctrlr": 127, 00:21:04.462 "in_capsule_data_size": 4096, 00:21:04.462 "max_io_size": 131072, 00:21:04.462 "io_unit_size": 131072, 00:21:04.462 "max_aq_depth": 128, 00:21:04.462 "num_shared_buffers": 511, 00:21:04.462 "buf_cache_size": 4294967295, 00:21:04.462 "dif_insert_or_strip": false, 00:21:04.462 "zcopy": false, 00:21:04.462 "c2h_success": false, 00:21:04.462 "sock_priority": 0, 00:21:04.462 "abort_timeout_sec": 1, 00:21:04.462 "ack_timeout": 0, 00:21:04.462 "data_wr_pool_size": 0 00:21:04.462 } 00:21:04.462 }, 00:21:04.462 { 00:21:04.462 "method": "nvmf_create_subsystem", 00:21:04.462 "params": { 00:21:04.462 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.462 "allow_any_host": false, 00:21:04.462 "serial_number": "SPDK00000000000001", 00:21:04.462 "model_number": "SPDK bdev Controller", 00:21:04.462 "max_namespaces": 10, 00:21:04.462 "min_cntlid": 1, 00:21:04.462 "max_cntlid": 65519, 00:21:04.462 "ana_reporting": false 00:21:04.462 } 00:21:04.462 }, 00:21:04.462 { 00:21:04.462 "method": "nvmf_subsystem_add_host", 00:21:04.462 "params": { 00:21:04.462 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.462 "host": "nqn.2016-06.io.spdk:host1", 00:21:04.462 "psk": "/tmp/tmp.tz3RWmovQo" 00:21:04.462 } 00:21:04.462 }, 00:21:04.462 { 00:21:04.462 "method": "nvmf_subsystem_add_ns", 00:21:04.462 "params": { 00:21:04.462 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.462 "namespace": { 00:21:04.462 "nsid": 1, 00:21:04.462 "bdev_name": "malloc0", 00:21:04.462 "nguid": "1E3C6B261945426EA1E36900B633FF20", 00:21:04.462 "uuid": "1e3c6b26-1945-426e-a1e3-6900b633ff20", 00:21:04.462 "no_auto_visible": false 00:21:04.462 } 00:21:04.462 } 00:21:04.462 }, 00:21:04.462 { 00:21:04.462 "method": "nvmf_subsystem_add_listener", 00:21:04.462 "params": { 00:21:04.462 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.462 "listen_address": { 00:21:04.462 "trtype": "TCP", 00:21:04.462 "adrfam": "IPv4", 00:21:04.462 "traddr": "10.0.0.2", 00:21:04.462 "trsvcid": "4420" 00:21:04.462 }, 00:21:04.462 "secure_channel": true 00:21:04.462 } 
00:21:04.462 } 00:21:04.462 ] 00:21:04.462 } 00:21:04.462 ] 00:21:04.462 }' 00:21:04.462 22:18:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2813350 00:21:04.462 22:18:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2813350 00:21:04.462 22:18:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:04.462 22:18:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2813350 ']' 00:21:04.462 22:18:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:04.462 22:18:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:04.462 22:18:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:04.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:04.462 22:18:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:04.462 22:18:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:04.462 [2024-07-15 22:18:29.722302] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:21:04.462 [2024-07-15 22:18:29.722357] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:04.462 EAL: No free 2048 kB hugepages reported on node 1 00:21:04.723 [2024-07-15 22:18:29.804669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.723 [2024-07-15 22:18:29.857606] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:04.723 [2024-07-15 22:18:29.857637] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:04.723 [2024-07-15 22:18:29.857642] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:04.723 [2024-07-15 22:18:29.857646] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:04.723 [2024-07-15 22:18:29.857650] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:04.723 [2024-07-15 22:18:29.857691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:04.723 [2024-07-15 22:18:30.042701] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:04.983 [2024-07-15 22:18:30.058674] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:04.983 [2024-07-15 22:18:30.074723] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:04.983 [2024-07-15 22:18:30.086274] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:05.242 22:18:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:05.242 22:18:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:05.242 22:18:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:05.242 22:18:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:05.242 22:18:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.242 22:18:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:05.242 22:18:30 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2813681 00:21:05.242 22:18:30 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2813681 /var/tmp/bdevperf.sock 00:21:05.242 22:18:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2813681 ']' 00:21:05.242 22:18:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:05.242 22:18:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:05.242 22:18:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:05.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
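Note: from target/tls.sh@203 onward the same TLS setup is exercised config-first: the JSON captured by save_config above is fed back to a fresh nvmf_tgt through /dev/fd/62 and to bdevperf through /dev/fd/63, so the subsystem, TLS listener and PSK host entry come from a configuration file rather than from live RPCs. A minimal sketch of that pattern, assuming $tgtconf and $bdevperfconf hold the two JSON dumps shown earlier, that nvmf_tgt and bdevperf abbreviate the build/bin and build/examples binaries used above, and that the /dev/fd paths in the log come from process substitution:

  # Target replays subsystem, TLS listener and PSK host entry from the saved JSON
  nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf") &
  # Initiator replays bdev_nvme_attach_controller (including the psk path) from its saved JSON
  bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &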
00:21:05.242 22:18:30 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:05.242 22:18:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:05.242 22:18:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.242 22:18:30 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:21:05.242 "subsystems": [ 00:21:05.242 { 00:21:05.242 "subsystem": "keyring", 00:21:05.242 "config": [] 00:21:05.242 }, 00:21:05.242 { 00:21:05.242 "subsystem": "iobuf", 00:21:05.242 "config": [ 00:21:05.242 { 00:21:05.242 "method": "iobuf_set_options", 00:21:05.242 "params": { 00:21:05.242 "small_pool_count": 8192, 00:21:05.242 "large_pool_count": 1024, 00:21:05.242 "small_bufsize": 8192, 00:21:05.242 "large_bufsize": 135168 00:21:05.242 } 00:21:05.242 } 00:21:05.242 ] 00:21:05.242 }, 00:21:05.242 { 00:21:05.242 "subsystem": "sock", 00:21:05.242 "config": [ 00:21:05.242 { 00:21:05.242 "method": "sock_set_default_impl", 00:21:05.242 "params": { 00:21:05.242 "impl_name": "posix" 00:21:05.242 } 00:21:05.242 }, 00:21:05.242 { 00:21:05.242 "method": "sock_impl_set_options", 00:21:05.242 "params": { 00:21:05.242 "impl_name": "ssl", 00:21:05.242 "recv_buf_size": 4096, 00:21:05.242 "send_buf_size": 4096, 00:21:05.242 "enable_recv_pipe": true, 00:21:05.242 "enable_quickack": false, 00:21:05.242 "enable_placement_id": 0, 00:21:05.242 "enable_zerocopy_send_server": true, 00:21:05.242 "enable_zerocopy_send_client": false, 00:21:05.242 "zerocopy_threshold": 0, 00:21:05.242 "tls_version": 0, 00:21:05.242 "enable_ktls": false 00:21:05.242 } 00:21:05.242 }, 00:21:05.242 { 00:21:05.242 "method": "sock_impl_set_options", 00:21:05.242 "params": { 00:21:05.242 "impl_name": "posix", 00:21:05.242 "recv_buf_size": 2097152, 00:21:05.242 "send_buf_size": 2097152, 00:21:05.242 "enable_recv_pipe": true, 00:21:05.242 "enable_quickack": false, 00:21:05.242 "enable_placement_id": 0, 00:21:05.242 "enable_zerocopy_send_server": true, 00:21:05.242 "enable_zerocopy_send_client": false, 00:21:05.242 "zerocopy_threshold": 0, 00:21:05.242 "tls_version": 0, 00:21:05.242 "enable_ktls": false 00:21:05.242 } 00:21:05.242 } 00:21:05.242 ] 00:21:05.242 }, 00:21:05.242 { 00:21:05.242 "subsystem": "vmd", 00:21:05.242 "config": [] 00:21:05.242 }, 00:21:05.242 { 00:21:05.242 "subsystem": "accel", 00:21:05.242 "config": [ 00:21:05.242 { 00:21:05.242 "method": "accel_set_options", 00:21:05.242 "params": { 00:21:05.242 "small_cache_size": 128, 00:21:05.242 "large_cache_size": 16, 00:21:05.242 "task_count": 2048, 00:21:05.242 "sequence_count": 2048, 00:21:05.242 "buf_count": 2048 00:21:05.242 } 00:21:05.242 } 00:21:05.242 ] 00:21:05.242 }, 00:21:05.242 { 00:21:05.242 "subsystem": "bdev", 00:21:05.242 "config": [ 00:21:05.242 { 00:21:05.242 "method": "bdev_set_options", 00:21:05.242 "params": { 00:21:05.242 "bdev_io_pool_size": 65535, 00:21:05.242 "bdev_io_cache_size": 256, 00:21:05.242 "bdev_auto_examine": true, 00:21:05.242 "iobuf_small_cache_size": 128, 00:21:05.242 "iobuf_large_cache_size": 16 00:21:05.242 } 00:21:05.242 }, 00:21:05.242 { 00:21:05.242 "method": "bdev_raid_set_options", 00:21:05.242 "params": { 00:21:05.242 "process_window_size_kb": 1024 00:21:05.242 } 00:21:05.242 }, 00:21:05.242 { 00:21:05.242 "method": "bdev_iscsi_set_options", 00:21:05.242 "params": { 00:21:05.242 "timeout_sec": 30 00:21:05.242 } 00:21:05.242 }, 00:21:05.242 { 00:21:05.242 "method": 
"bdev_nvme_set_options", 00:21:05.242 "params": { 00:21:05.242 "action_on_timeout": "none", 00:21:05.242 "timeout_us": 0, 00:21:05.242 "timeout_admin_us": 0, 00:21:05.242 "keep_alive_timeout_ms": 10000, 00:21:05.242 "arbitration_burst": 0, 00:21:05.242 "low_priority_weight": 0, 00:21:05.242 "medium_priority_weight": 0, 00:21:05.242 "high_priority_weight": 0, 00:21:05.242 "nvme_adminq_poll_period_us": 10000, 00:21:05.242 "nvme_ioq_poll_period_us": 0, 00:21:05.242 "io_queue_requests": 512, 00:21:05.242 "delay_cmd_submit": true, 00:21:05.242 "transport_retry_count": 4, 00:21:05.242 "bdev_retry_count": 3, 00:21:05.242 "transport_ack_timeout": 0, 00:21:05.242 "ctrlr_loss_timeout_sec": 0, 00:21:05.242 "reconnect_delay_sec": 0, 00:21:05.242 "fast_io_fail_timeout_sec": 0, 00:21:05.242 "disable_auto_failback": false, 00:21:05.242 "generate_uuids": false, 00:21:05.242 "transport_tos": 0, 00:21:05.242 "nvme_error_stat": false, 00:21:05.242 "rdma_srq_size": 0, 00:21:05.242 "io_path_stat": false, 00:21:05.242 "allow_accel_sequence": false, 00:21:05.242 "rdma_max_cq_size": 0, 00:21:05.242 "rdma_cm_event_timeout_ms": 0, 00:21:05.242 "dhchap_digests": [ 00:21:05.242 "sha256", 00:21:05.242 "sha384", 00:21:05.242 "sha512" 00:21:05.242 ], 00:21:05.242 "dhchap_dhgroups": [ 00:21:05.242 "null", 00:21:05.242 "ffdhe2048", 00:21:05.242 "ffdhe3072", 00:21:05.242 "ffdhe4096", 00:21:05.242 "ffdhe6144", 00:21:05.242 "ffdhe8192" 00:21:05.242 ] 00:21:05.242 } 00:21:05.242 }, 00:21:05.242 { 00:21:05.242 "method": "bdev_nvme_attach_controller", 00:21:05.242 "params": { 00:21:05.242 "name": "TLSTEST", 00:21:05.242 "trtype": "TCP", 00:21:05.242 "adrfam": "IPv4", 00:21:05.242 "traddr": "10.0.0.2", 00:21:05.242 "trsvcid": "4420", 00:21:05.242 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.242 "prchk_reftag": false, 00:21:05.242 "prchk_guard": false, 00:21:05.242 "ctrlr_loss_timeout_sec": 0, 00:21:05.242 "reconnect_delay_sec": 0, 00:21:05.242 "fast_io_fail_timeout_sec": 0, 00:21:05.242 "psk": "/tmp/tmp.tz3RWmovQo", 00:21:05.242 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:05.242 "hdgst": false, 00:21:05.242 "ddgst": false 00:21:05.242 } 00:21:05.242 }, 00:21:05.242 { 00:21:05.242 "method": "bdev_nvme_set_hotplug", 00:21:05.242 "params": { 00:21:05.242 "period_us": 100000, 00:21:05.242 "enable": false 00:21:05.242 } 00:21:05.242 }, 00:21:05.242 { 00:21:05.242 "method": "bdev_wait_for_examine" 00:21:05.242 } 00:21:05.242 ] 00:21:05.242 }, 00:21:05.242 { 00:21:05.242 "subsystem": "nbd", 00:21:05.242 "config": [] 00:21:05.242 } 00:21:05.242 ] 00:21:05.242 }' 00:21:05.502 [2024-07-15 22:18:30.570498] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
00:21:05.502 [2024-07-15 22:18:30.570554] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2813681 ] 00:21:05.502 EAL: No free 2048 kB hugepages reported on node 1 00:21:05.502 [2024-07-15 22:18:30.620066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.502 [2024-07-15 22:18:30.672407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:05.502 [2024-07-15 22:18:30.796855] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:05.502 [2024-07-15 22:18:30.796921] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:06.072 22:18:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:06.072 22:18:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:06.072 22:18:31 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:06.072 Running I/O for 10 seconds... 00:21:18.297 00:21:18.297 Latency(us) 00:21:18.297 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.297 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:18.297 Verification LBA range: start 0x0 length 0x2000 00:21:18.297 TLSTESTn1 : 10.07 2384.51 9.31 0.00 0.00 53493.95 6225.92 143305.39 00:21:18.297 =================================================================================================================== 00:21:18.297 Total : 2384.51 9.31 0.00 0.00 53493.95 6225.92 143305.39 00:21:18.297 0 00:21:18.297 22:18:41 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:18.297 22:18:41 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 2813681 00:21:18.297 22:18:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2813681 ']' 00:21:18.297 22:18:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2813681 00:21:18.297 22:18:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:18.297 22:18:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:18.297 22:18:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2813681 00:21:18.297 22:18:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:18.297 22:18:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:18.297 22:18:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2813681' 00:21:18.297 killing process with pid 2813681 00:21:18.297 22:18:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2813681 00:21:18.297 Received shutdown signal, test time was about 10.000000 seconds 00:21:18.297 00:21:18.297 Latency(us) 00:21:18.297 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.297 =================================================================================================================== 00:21:18.297 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:18.297 [2024-07-15 22:18:41.568165] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:18.297 22:18:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2813681 00:21:18.297 22:18:41 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 2813350 00:21:18.297 22:18:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2813350 ']' 00:21:18.297 22:18:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2813350 00:21:18.297 22:18:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:18.297 22:18:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:18.297 22:18:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2813350 00:21:18.297 22:18:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:18.297 22:18:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:18.297 22:18:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2813350' 00:21:18.297 killing process with pid 2813350 00:21:18.297 22:18:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2813350 00:21:18.297 [2024-07-15 22:18:41.736541] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:18.297 22:18:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2813350 00:21:18.297 22:18:41 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:21:18.297 22:18:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:18.297 22:18:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:18.297 22:18:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.297 22:18:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2815729 00:21:18.297 22:18:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:18.297 22:18:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2815729 00:21:18.297 22:18:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2815729 ']' 00:21:18.297 22:18:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:18.297 22:18:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:18.297 22:18:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:18.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:18.297 22:18:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:18.297 22:18:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.297 [2024-07-15 22:18:41.911882] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
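Note: the final phase (target/tls.sh@218 onward, below) repeats the test with the keyring interface on the initiator: instead of handing a PSK file path to bdev_nvme_attach_controller (the deprecated route flagged in the warnings above), the file is first registered as a named key and the controller references it by name. The two RPCs issued against the bdevperf socket, as seen later in this log, with $PSK_FILE standing for the same temporary PSK file:

  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$PSK_FILE"
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1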
00:21:18.297 [2024-07-15 22:18:41.911936] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:18.297 EAL: No free 2048 kB hugepages reported on node 1 00:21:18.297 [2024-07-15 22:18:41.977379] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.297 [2024-07-15 22:18:42.040915] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:18.297 [2024-07-15 22:18:42.040956] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:18.297 [2024-07-15 22:18:42.040963] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:18.297 [2024-07-15 22:18:42.040969] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:18.297 [2024-07-15 22:18:42.040975] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:18.297 [2024-07-15 22:18:42.040995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.297 22:18:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:18.297 22:18:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:18.297 22:18:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:18.297 22:18:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:18.297 22:18:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.297 22:18:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:18.297 22:18:42 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.tz3RWmovQo 00:21:18.297 22:18:42 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.tz3RWmovQo 00:21:18.297 22:18:42 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:18.297 [2024-07-15 22:18:42.867927] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:18.297 22:18:42 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:18.297 22:18:43 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:18.297 [2024-07-15 22:18:43.200756] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:18.297 [2024-07-15 22:18:43.200934] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:18.297 22:18:43 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:18.297 malloc0 00:21:18.297 22:18:43 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:18.297 22:18:43 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.tz3RWmovQo 00:21:18.558 [2024-07-15 22:18:43.700599] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:18.558 22:18:43 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2816158 00:21:18.558 22:18:43 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:18.558 22:18:43 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:18.558 22:18:43 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2816158 /var/tmp/bdevperf.sock 00:21:18.558 22:18:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2816158 ']' 00:21:18.558 22:18:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:18.558 22:18:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:18.558 22:18:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:18.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:18.558 22:18:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:18.558 22:18:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.558 [2024-07-15 22:18:43.786416] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:21:18.558 [2024-07-15 22:18:43.786468] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2816158 ] 00:21:18.558 EAL: No free 2048 kB hugepages reported on node 1 00:21:18.558 [2024-07-15 22:18:43.860443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.819 [2024-07-15 22:18:43.914207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:19.426 22:18:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:19.426 22:18:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:19.426 22:18:44 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.tz3RWmovQo 00:21:19.426 22:18:44 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:19.687 [2024-07-15 22:18:44.807972] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:19.687 nvme0n1 00:21:19.687 22:18:44 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:19.687 Running I/O for 1 seconds... 
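For readability, the RPC sequence the trace above walks through (build a TLS-capable target, hand the PSK to bdevperf, attach, then drive the run) condenses to the following; addresses, NQNs, bdevperf flags and the temporary key path are the ones from this run, and the --psk path form on nvmf_subsystem_add_host is the deprecated variant the warning above refers to, with the key0 keyring entry being its replacement on the initiator side:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"
  # target side (default RPC socket /var/tmp/spdk.sock)
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  # -k makes this a TLS listener (hence the "TLS support is considered experimental" notice)
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tz3RWmovQo
  # initiator side: bdevperf started idle (-z) on its own RPC socket
  "$SPDK/build/examples/bdevperf" -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
  # (wait for /var/tmp/bdevperf.sock to come up, as waitforlisten does, before issuing RPCs)
  $RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.tz3RWmovQo
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests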
00:21:21.074 00:21:21.074 Latency(us) 00:21:21.074 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.074 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:21.074 Verification LBA range: start 0x0 length 0x2000 00:21:21.074 nvme0n1 : 1.07 1686.82 6.59 0.00 0.00 73685.61 5188.27 128450.56 00:21:21.074 =================================================================================================================== 00:21:21.074 Total : 1686.82 6.59 0.00 0.00 73685.61 5188.27 128450.56 00:21:21.074 0 00:21:21.074 22:18:46 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 2816158 00:21:21.074 22:18:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2816158 ']' 00:21:21.074 22:18:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2816158 00:21:21.074 22:18:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:21.074 22:18:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:21.074 22:18:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2816158 00:21:21.074 22:18:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:21.074 22:18:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:21.074 22:18:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2816158' 00:21:21.074 killing process with pid 2816158 00:21:21.074 22:18:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2816158 00:21:21.074 Received shutdown signal, test time was about 1.000000 seconds 00:21:21.074 00:21:21.074 Latency(us) 00:21:21.074 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.074 =================================================================================================================== 00:21:21.074 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:21.074 22:18:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2816158 00:21:21.074 22:18:46 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 2815729 00:21:21.074 22:18:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2815729 ']' 00:21:21.074 22:18:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2815729 00:21:21.074 22:18:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:21.074 22:18:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:21.074 22:18:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2815729 00:21:21.074 22:18:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:21.074 22:18:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:21.074 22:18:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2815729' 00:21:21.074 killing process with pid 2815729 00:21:21.074 22:18:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2815729 00:21:21.074 [2024-07-15 22:18:46.280811] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:21.074 22:18:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2815729 00:21:21.335 22:18:46 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:21:21.335 22:18:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:21.335 
22:18:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:21.335 22:18:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.335 22:18:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2816740 00:21:21.335 22:18:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2816740 00:21:21.335 22:18:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:21.335 22:18:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2816740 ']' 00:21:21.335 22:18:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:21.335 22:18:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:21.335 22:18:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:21.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:21.335 22:18:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:21.335 22:18:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.335 [2024-07-15 22:18:46.478238] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:21:21.335 [2024-07-15 22:18:46.478290] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:21.335 EAL: No free 2048 kB hugepages reported on node 1 00:21:21.335 [2024-07-15 22:18:46.562232] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.335 [2024-07-15 22:18:46.633599] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:21.335 [2024-07-15 22:18:46.633641] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:21.335 [2024-07-15 22:18:46.633649] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:21.335 [2024-07-15 22:18:46.633656] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:21.335 [2024-07-15 22:18:46.633662] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
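A quick cross-check on the bdevperf tables in this log: with 4096-byte I/Os, MiB/s is simply IOPS x 4096 / 2^20, that is IOPS / 256. The IOPS values below are copied from the three verify runs in this section (one above, two further down) and reproduce the reported 6.59, 8.85 and 7.26 MiB/s:

  # IOPS at 4 KiB -> MiB/s
  for iops in 1686.82 2266.72 1859.56; do
      awk -v i="$iops" 'BEGIN { printf "%s IOPS -> %.2f MiB/s\n", i, i * 4096 / 1048576 }'
  done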
00:21:21.335 [2024-07-15 22:18:46.633689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:22.275 22:18:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:22.275 22:18:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:22.275 22:18:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:22.275 22:18:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:22.275 22:18:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:22.275 22:18:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:22.275 22:18:47 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:21:22.275 22:18:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.275 22:18:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:22.275 [2024-07-15 22:18:47.368197] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:22.275 malloc0 00:21:22.275 [2024-07-15 22:18:47.394982] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:22.275 [2024-07-15 22:18:47.395177] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:22.275 22:18:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.275 22:18:47 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=2816971 00:21:22.275 22:18:47 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 2816971 /var/tmp/bdevperf.sock 00:21:22.275 22:18:47 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:22.275 22:18:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2816971 ']' 00:21:22.275 22:18:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:22.276 22:18:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:22.276 22:18:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:22.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:22.276 22:18:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:22.276 22:18:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:22.276 [2024-07-15 22:18:47.471023] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
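The large JSON blobs that follow are not hand-written: once the second attach succeeds, the script captures the live configuration of both processes with save_config, target first and then bdevperf (reading rpc_cmd in the trace as the test helper wrapping the same rpc.py call is an interpretation, not something the log states). A sketch of the capture step, reusing the SPDK path from the earlier sketches:

  # capture target and bdevperf configuration as JSON for later replay
  tgtcfg=$("$SPDK/scripts/rpc.py" save_config)
  bperfcfg=$("$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock save_config)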
00:21:22.276 [2024-07-15 22:18:47.471068] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2816971 ] 00:21:22.276 EAL: No free 2048 kB hugepages reported on node 1 00:21:22.276 [2024-07-15 22:18:47.546126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.276 [2024-07-15 22:18:47.599967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.217 22:18:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:23.217 22:18:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:23.217 22:18:48 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.tz3RWmovQo 00:21:23.217 22:18:48 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:23.477 [2024-07-15 22:18:48.545770] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:23.477 nvme0n1 00:21:23.477 22:18:48 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:23.477 Running I/O for 1 seconds... 00:21:24.861 00:21:24.862 Latency(us) 00:21:24.862 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:24.862 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:24.862 Verification LBA range: start 0x0 length 0x2000 00:21:24.862 nvme0n1 : 1.04 2266.72 8.85 0.00 0.00 55534.71 5789.01 93061.12 00:21:24.862 =================================================================================================================== 00:21:24.862 Total : 2266.72 8.85 0.00 0.00 55534.71 5789.01 93061.12 00:21:24.862 0 00:21:24.862 22:18:49 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:21:24.862 22:18:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.862 22:18:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:24.862 22:18:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.862 22:18:49 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:21:24.862 "subsystems": [ 00:21:24.862 { 00:21:24.862 "subsystem": "keyring", 00:21:24.862 "config": [ 00:21:24.862 { 00:21:24.862 "method": "keyring_file_add_key", 00:21:24.862 "params": { 00:21:24.862 "name": "key0", 00:21:24.862 "path": "/tmp/tmp.tz3RWmovQo" 00:21:24.862 } 00:21:24.862 } 00:21:24.862 ] 00:21:24.862 }, 00:21:24.862 { 00:21:24.862 "subsystem": "iobuf", 00:21:24.862 "config": [ 00:21:24.862 { 00:21:24.862 "method": "iobuf_set_options", 00:21:24.862 "params": { 00:21:24.862 "small_pool_count": 8192, 00:21:24.862 "large_pool_count": 1024, 00:21:24.862 "small_bufsize": 8192, 00:21:24.862 "large_bufsize": 135168 00:21:24.862 } 00:21:24.862 } 00:21:24.862 ] 00:21:24.862 }, 00:21:24.862 { 00:21:24.862 "subsystem": "sock", 00:21:24.862 "config": [ 00:21:24.862 { 00:21:24.862 "method": "sock_set_default_impl", 00:21:24.862 "params": { 00:21:24.862 "impl_name": "posix" 00:21:24.862 } 
00:21:24.862 }, 00:21:24.862 { 00:21:24.862 "method": "sock_impl_set_options", 00:21:24.862 "params": { 00:21:24.862 "impl_name": "ssl", 00:21:24.862 "recv_buf_size": 4096, 00:21:24.862 "send_buf_size": 4096, 00:21:24.862 "enable_recv_pipe": true, 00:21:24.862 "enable_quickack": false, 00:21:24.862 "enable_placement_id": 0, 00:21:24.862 "enable_zerocopy_send_server": true, 00:21:24.862 "enable_zerocopy_send_client": false, 00:21:24.862 "zerocopy_threshold": 0, 00:21:24.862 "tls_version": 0, 00:21:24.862 "enable_ktls": false 00:21:24.862 } 00:21:24.862 }, 00:21:24.862 { 00:21:24.862 "method": "sock_impl_set_options", 00:21:24.862 "params": { 00:21:24.862 "impl_name": "posix", 00:21:24.862 "recv_buf_size": 2097152, 00:21:24.862 "send_buf_size": 2097152, 00:21:24.862 "enable_recv_pipe": true, 00:21:24.862 "enable_quickack": false, 00:21:24.862 "enable_placement_id": 0, 00:21:24.862 "enable_zerocopy_send_server": true, 00:21:24.862 "enable_zerocopy_send_client": false, 00:21:24.862 "zerocopy_threshold": 0, 00:21:24.862 "tls_version": 0, 00:21:24.862 "enable_ktls": false 00:21:24.862 } 00:21:24.862 } 00:21:24.862 ] 00:21:24.862 }, 00:21:24.862 { 00:21:24.862 "subsystem": "vmd", 00:21:24.862 "config": [] 00:21:24.862 }, 00:21:24.862 { 00:21:24.862 "subsystem": "accel", 00:21:24.862 "config": [ 00:21:24.862 { 00:21:24.862 "method": "accel_set_options", 00:21:24.862 "params": { 00:21:24.862 "small_cache_size": 128, 00:21:24.862 "large_cache_size": 16, 00:21:24.862 "task_count": 2048, 00:21:24.862 "sequence_count": 2048, 00:21:24.862 "buf_count": 2048 00:21:24.862 } 00:21:24.862 } 00:21:24.862 ] 00:21:24.862 }, 00:21:24.862 { 00:21:24.862 "subsystem": "bdev", 00:21:24.862 "config": [ 00:21:24.862 { 00:21:24.862 "method": "bdev_set_options", 00:21:24.862 "params": { 00:21:24.862 "bdev_io_pool_size": 65535, 00:21:24.862 "bdev_io_cache_size": 256, 00:21:24.862 "bdev_auto_examine": true, 00:21:24.862 "iobuf_small_cache_size": 128, 00:21:24.862 "iobuf_large_cache_size": 16 00:21:24.862 } 00:21:24.862 }, 00:21:24.862 { 00:21:24.862 "method": "bdev_raid_set_options", 00:21:24.862 "params": { 00:21:24.862 "process_window_size_kb": 1024 00:21:24.862 } 00:21:24.862 }, 00:21:24.862 { 00:21:24.862 "method": "bdev_iscsi_set_options", 00:21:24.862 "params": { 00:21:24.862 "timeout_sec": 30 00:21:24.862 } 00:21:24.862 }, 00:21:24.862 { 00:21:24.862 "method": "bdev_nvme_set_options", 00:21:24.862 "params": { 00:21:24.862 "action_on_timeout": "none", 00:21:24.862 "timeout_us": 0, 00:21:24.862 "timeout_admin_us": 0, 00:21:24.862 "keep_alive_timeout_ms": 10000, 00:21:24.862 "arbitration_burst": 0, 00:21:24.862 "low_priority_weight": 0, 00:21:24.862 "medium_priority_weight": 0, 00:21:24.862 "high_priority_weight": 0, 00:21:24.862 "nvme_adminq_poll_period_us": 10000, 00:21:24.862 "nvme_ioq_poll_period_us": 0, 00:21:24.862 "io_queue_requests": 0, 00:21:24.862 "delay_cmd_submit": true, 00:21:24.862 "transport_retry_count": 4, 00:21:24.862 "bdev_retry_count": 3, 00:21:24.862 "transport_ack_timeout": 0, 00:21:24.862 "ctrlr_loss_timeout_sec": 0, 00:21:24.862 "reconnect_delay_sec": 0, 00:21:24.862 "fast_io_fail_timeout_sec": 0, 00:21:24.862 "disable_auto_failback": false, 00:21:24.862 "generate_uuids": false, 00:21:24.862 "transport_tos": 0, 00:21:24.862 "nvme_error_stat": false, 00:21:24.862 "rdma_srq_size": 0, 00:21:24.862 "io_path_stat": false, 00:21:24.862 "allow_accel_sequence": false, 00:21:24.862 "rdma_max_cq_size": 0, 00:21:24.862 "rdma_cm_event_timeout_ms": 0, 00:21:24.862 "dhchap_digests": [ 00:21:24.862 "sha256", 
00:21:24.862 "sha384", 00:21:24.862 "sha512" 00:21:24.862 ], 00:21:24.862 "dhchap_dhgroups": [ 00:21:24.862 "null", 00:21:24.862 "ffdhe2048", 00:21:24.862 "ffdhe3072", 00:21:24.862 "ffdhe4096", 00:21:24.862 "ffdhe6144", 00:21:24.862 "ffdhe8192" 00:21:24.862 ] 00:21:24.862 } 00:21:24.862 }, 00:21:24.862 { 00:21:24.862 "method": "bdev_nvme_set_hotplug", 00:21:24.862 "params": { 00:21:24.862 "period_us": 100000, 00:21:24.862 "enable": false 00:21:24.862 } 00:21:24.862 }, 00:21:24.862 { 00:21:24.862 "method": "bdev_malloc_create", 00:21:24.862 "params": { 00:21:24.862 "name": "malloc0", 00:21:24.862 "num_blocks": 8192, 00:21:24.862 "block_size": 4096, 00:21:24.862 "physical_block_size": 4096, 00:21:24.862 "uuid": "2745431d-b53c-44fd-a245-6d188883e04b", 00:21:24.862 "optimal_io_boundary": 0 00:21:24.862 } 00:21:24.862 }, 00:21:24.862 { 00:21:24.862 "method": "bdev_wait_for_examine" 00:21:24.862 } 00:21:24.862 ] 00:21:24.862 }, 00:21:24.862 { 00:21:24.862 "subsystem": "nbd", 00:21:24.862 "config": [] 00:21:24.862 }, 00:21:24.862 { 00:21:24.862 "subsystem": "scheduler", 00:21:24.862 "config": [ 00:21:24.862 { 00:21:24.862 "method": "framework_set_scheduler", 00:21:24.862 "params": { 00:21:24.862 "name": "static" 00:21:24.862 } 00:21:24.862 } 00:21:24.862 ] 00:21:24.862 }, 00:21:24.862 { 00:21:24.862 "subsystem": "nvmf", 00:21:24.862 "config": [ 00:21:24.862 { 00:21:24.862 "method": "nvmf_set_config", 00:21:24.862 "params": { 00:21:24.862 "discovery_filter": "match_any", 00:21:24.862 "admin_cmd_passthru": { 00:21:24.862 "identify_ctrlr": false 00:21:24.862 } 00:21:24.862 } 00:21:24.862 }, 00:21:24.862 { 00:21:24.862 "method": "nvmf_set_max_subsystems", 00:21:24.862 "params": { 00:21:24.862 "max_subsystems": 1024 00:21:24.862 } 00:21:24.862 }, 00:21:24.862 { 00:21:24.862 "method": "nvmf_set_crdt", 00:21:24.862 "params": { 00:21:24.862 "crdt1": 0, 00:21:24.862 "crdt2": 0, 00:21:24.862 "crdt3": 0 00:21:24.862 } 00:21:24.862 }, 00:21:24.862 { 00:21:24.862 "method": "nvmf_create_transport", 00:21:24.862 "params": { 00:21:24.862 "trtype": "TCP", 00:21:24.862 "max_queue_depth": 128, 00:21:24.862 "max_io_qpairs_per_ctrlr": 127, 00:21:24.862 "in_capsule_data_size": 4096, 00:21:24.862 "max_io_size": 131072, 00:21:24.862 "io_unit_size": 131072, 00:21:24.862 "max_aq_depth": 128, 00:21:24.862 "num_shared_buffers": 511, 00:21:24.862 "buf_cache_size": 4294967295, 00:21:24.862 "dif_insert_or_strip": false, 00:21:24.862 "zcopy": false, 00:21:24.862 "c2h_success": false, 00:21:24.862 "sock_priority": 0, 00:21:24.862 "abort_timeout_sec": 1, 00:21:24.862 "ack_timeout": 0, 00:21:24.862 "data_wr_pool_size": 0 00:21:24.862 } 00:21:24.862 }, 00:21:24.862 { 00:21:24.862 "method": "nvmf_create_subsystem", 00:21:24.862 "params": { 00:21:24.862 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:24.862 "allow_any_host": false, 00:21:24.862 "serial_number": "00000000000000000000", 00:21:24.862 "model_number": "SPDK bdev Controller", 00:21:24.862 "max_namespaces": 32, 00:21:24.862 "min_cntlid": 1, 00:21:24.862 "max_cntlid": 65519, 00:21:24.862 "ana_reporting": false 00:21:24.862 } 00:21:24.862 }, 00:21:24.862 { 00:21:24.862 "method": "nvmf_subsystem_add_host", 00:21:24.862 "params": { 00:21:24.862 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:24.862 "host": "nqn.2016-06.io.spdk:host1", 00:21:24.862 "psk": "key0" 00:21:24.862 } 00:21:24.862 }, 00:21:24.862 { 00:21:24.862 "method": "nvmf_subsystem_add_ns", 00:21:24.862 "params": { 00:21:24.862 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:24.862 "namespace": { 00:21:24.862 "nsid": 1, 
00:21:24.863 "bdev_name": "malloc0", 00:21:24.863 "nguid": "2745431DB53C44FDA2456D188883E04B", 00:21:24.863 "uuid": "2745431d-b53c-44fd-a245-6d188883e04b", 00:21:24.863 "no_auto_visible": false 00:21:24.863 } 00:21:24.863 } 00:21:24.863 }, 00:21:24.863 { 00:21:24.863 "method": "nvmf_subsystem_add_listener", 00:21:24.863 "params": { 00:21:24.863 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:24.863 "listen_address": { 00:21:24.863 "trtype": "TCP", 00:21:24.863 "adrfam": "IPv4", 00:21:24.863 "traddr": "10.0.0.2", 00:21:24.863 "trsvcid": "4420" 00:21:24.863 }, 00:21:24.863 "secure_channel": false, 00:21:24.863 "sock_impl": "ssl" 00:21:24.863 } 00:21:24.863 } 00:21:24.863 ] 00:21:24.863 } 00:21:24.863 ] 00:21:24.863 }' 00:21:24.863 22:18:49 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:24.863 22:18:50 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:21:24.863 "subsystems": [ 00:21:24.863 { 00:21:24.863 "subsystem": "keyring", 00:21:24.863 "config": [ 00:21:24.863 { 00:21:24.863 "method": "keyring_file_add_key", 00:21:24.863 "params": { 00:21:24.863 "name": "key0", 00:21:24.863 "path": "/tmp/tmp.tz3RWmovQo" 00:21:24.863 } 00:21:24.863 } 00:21:24.863 ] 00:21:24.863 }, 00:21:24.863 { 00:21:24.863 "subsystem": "iobuf", 00:21:24.863 "config": [ 00:21:24.863 { 00:21:24.863 "method": "iobuf_set_options", 00:21:24.863 "params": { 00:21:24.863 "small_pool_count": 8192, 00:21:24.863 "large_pool_count": 1024, 00:21:24.863 "small_bufsize": 8192, 00:21:24.863 "large_bufsize": 135168 00:21:24.863 } 00:21:24.863 } 00:21:24.863 ] 00:21:24.863 }, 00:21:24.863 { 00:21:24.863 "subsystem": "sock", 00:21:24.863 "config": [ 00:21:24.863 { 00:21:24.863 "method": "sock_set_default_impl", 00:21:24.863 "params": { 00:21:24.863 "impl_name": "posix" 00:21:24.863 } 00:21:24.863 }, 00:21:24.863 { 00:21:24.863 "method": "sock_impl_set_options", 00:21:24.863 "params": { 00:21:24.863 "impl_name": "ssl", 00:21:24.863 "recv_buf_size": 4096, 00:21:24.863 "send_buf_size": 4096, 00:21:24.863 "enable_recv_pipe": true, 00:21:24.863 "enable_quickack": false, 00:21:24.863 "enable_placement_id": 0, 00:21:24.863 "enable_zerocopy_send_server": true, 00:21:24.863 "enable_zerocopy_send_client": false, 00:21:24.863 "zerocopy_threshold": 0, 00:21:24.863 "tls_version": 0, 00:21:24.863 "enable_ktls": false 00:21:24.863 } 00:21:24.863 }, 00:21:24.863 { 00:21:24.863 "method": "sock_impl_set_options", 00:21:24.863 "params": { 00:21:24.863 "impl_name": "posix", 00:21:24.863 "recv_buf_size": 2097152, 00:21:24.863 "send_buf_size": 2097152, 00:21:24.863 "enable_recv_pipe": true, 00:21:24.863 "enable_quickack": false, 00:21:24.863 "enable_placement_id": 0, 00:21:24.863 "enable_zerocopy_send_server": true, 00:21:24.863 "enable_zerocopy_send_client": false, 00:21:24.863 "zerocopy_threshold": 0, 00:21:24.863 "tls_version": 0, 00:21:24.863 "enable_ktls": false 00:21:24.863 } 00:21:24.863 } 00:21:24.863 ] 00:21:24.863 }, 00:21:24.863 { 00:21:24.863 "subsystem": "vmd", 00:21:24.863 "config": [] 00:21:24.863 }, 00:21:24.863 { 00:21:24.863 "subsystem": "accel", 00:21:24.863 "config": [ 00:21:24.863 { 00:21:24.863 "method": "accel_set_options", 00:21:24.863 "params": { 00:21:24.863 "small_cache_size": 128, 00:21:24.863 "large_cache_size": 16, 00:21:24.863 "task_count": 2048, 00:21:24.863 "sequence_count": 2048, 00:21:24.863 "buf_count": 2048 00:21:24.863 } 00:21:24.863 } 00:21:24.863 ] 00:21:24.863 }, 00:21:24.863 { 00:21:24.863 "subsystem": "bdev", 
00:21:24.863 "config": [ 00:21:24.863 { 00:21:24.863 "method": "bdev_set_options", 00:21:24.863 "params": { 00:21:24.863 "bdev_io_pool_size": 65535, 00:21:24.863 "bdev_io_cache_size": 256, 00:21:24.863 "bdev_auto_examine": true, 00:21:24.863 "iobuf_small_cache_size": 128, 00:21:24.863 "iobuf_large_cache_size": 16 00:21:24.863 } 00:21:24.863 }, 00:21:24.863 { 00:21:24.863 "method": "bdev_raid_set_options", 00:21:24.863 "params": { 00:21:24.863 "process_window_size_kb": 1024 00:21:24.863 } 00:21:24.863 }, 00:21:24.863 { 00:21:24.863 "method": "bdev_iscsi_set_options", 00:21:24.863 "params": { 00:21:24.863 "timeout_sec": 30 00:21:24.863 } 00:21:24.863 }, 00:21:24.863 { 00:21:24.863 "method": "bdev_nvme_set_options", 00:21:24.863 "params": { 00:21:24.863 "action_on_timeout": "none", 00:21:24.863 "timeout_us": 0, 00:21:24.863 "timeout_admin_us": 0, 00:21:24.863 "keep_alive_timeout_ms": 10000, 00:21:24.863 "arbitration_burst": 0, 00:21:24.863 "low_priority_weight": 0, 00:21:24.863 "medium_priority_weight": 0, 00:21:24.863 "high_priority_weight": 0, 00:21:24.863 "nvme_adminq_poll_period_us": 10000, 00:21:24.863 "nvme_ioq_poll_period_us": 0, 00:21:24.863 "io_queue_requests": 512, 00:21:24.863 "delay_cmd_submit": true, 00:21:24.863 "transport_retry_count": 4, 00:21:24.863 "bdev_retry_count": 3, 00:21:24.863 "transport_ack_timeout": 0, 00:21:24.863 "ctrlr_loss_timeout_sec": 0, 00:21:24.863 "reconnect_delay_sec": 0, 00:21:24.863 "fast_io_fail_timeout_sec": 0, 00:21:24.863 "disable_auto_failback": false, 00:21:24.863 "generate_uuids": false, 00:21:24.863 "transport_tos": 0, 00:21:24.863 "nvme_error_stat": false, 00:21:24.863 "rdma_srq_size": 0, 00:21:24.863 "io_path_stat": false, 00:21:24.863 "allow_accel_sequence": false, 00:21:24.863 "rdma_max_cq_size": 0, 00:21:24.863 "rdma_cm_event_timeout_ms": 0, 00:21:24.863 "dhchap_digests": [ 00:21:24.863 "sha256", 00:21:24.863 "sha384", 00:21:24.863 "sha512" 00:21:24.863 ], 00:21:24.863 "dhchap_dhgroups": [ 00:21:24.863 "null", 00:21:24.863 "ffdhe2048", 00:21:24.863 "ffdhe3072", 00:21:24.863 "ffdhe4096", 00:21:24.863 "ffdhe6144", 00:21:24.863 "ffdhe8192" 00:21:24.863 ] 00:21:24.863 } 00:21:24.863 }, 00:21:24.863 { 00:21:24.863 "method": "bdev_nvme_attach_controller", 00:21:24.863 "params": { 00:21:24.863 "name": "nvme0", 00:21:24.863 "trtype": "TCP", 00:21:24.863 "adrfam": "IPv4", 00:21:24.863 "traddr": "10.0.0.2", 00:21:24.863 "trsvcid": "4420", 00:21:24.863 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:24.863 "prchk_reftag": false, 00:21:24.863 "prchk_guard": false, 00:21:24.863 "ctrlr_loss_timeout_sec": 0, 00:21:24.863 "reconnect_delay_sec": 0, 00:21:24.863 "fast_io_fail_timeout_sec": 0, 00:21:24.863 "psk": "key0", 00:21:24.863 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:24.863 "hdgst": false, 00:21:24.863 "ddgst": false 00:21:24.863 } 00:21:24.863 }, 00:21:24.863 { 00:21:24.863 "method": "bdev_nvme_set_hotplug", 00:21:24.863 "params": { 00:21:24.863 "period_us": 100000, 00:21:24.863 "enable": false 00:21:24.863 } 00:21:24.863 }, 00:21:24.863 { 00:21:24.863 "method": "bdev_enable_histogram", 00:21:24.863 "params": { 00:21:24.863 "name": "nvme0n1", 00:21:24.863 "enable": true 00:21:24.863 } 00:21:24.863 }, 00:21:24.863 { 00:21:24.863 "method": "bdev_wait_for_examine" 00:21:24.863 } 00:21:24.863 ] 00:21:24.863 }, 00:21:24.863 { 00:21:24.863 "subsystem": "nbd", 00:21:24.863 "config": [] 00:21:24.863 } 00:21:24.863 ] 00:21:24.863 }' 00:21:24.863 22:18:50 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 2816971 00:21:24.863 22:18:50 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@948 -- # '[' -z 2816971 ']' 00:21:24.863 22:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2816971 00:21:24.863 22:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:24.863 22:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:24.863 22:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2816971 00:21:25.124 22:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:25.124 22:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:25.124 22:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2816971' 00:21:25.124 killing process with pid 2816971 00:21:25.124 22:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2816971 00:21:25.124 Received shutdown signal, test time was about 1.000000 seconds 00:21:25.124 00:21:25.124 Latency(us) 00:21:25.124 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.124 =================================================================================================================== 00:21:25.124 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:25.124 22:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2816971 00:21:25.124 22:18:50 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 2816740 00:21:25.124 22:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2816740 ']' 00:21:25.124 22:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2816740 00:21:25.124 22:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:25.124 22:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:25.124 22:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2816740 00:21:25.124 22:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:25.124 22:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:25.124 22:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2816740' 00:21:25.124 killing process with pid 2816740 00:21:25.124 22:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2816740 00:21:25.124 22:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2816740 00:21:25.385 22:18:50 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:21:25.385 22:18:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:25.385 22:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:25.385 22:18:50 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:21:25.385 "subsystems": [ 00:21:25.385 { 00:21:25.385 "subsystem": "keyring", 00:21:25.385 "config": [ 00:21:25.385 { 00:21:25.385 "method": "keyring_file_add_key", 00:21:25.385 "params": { 00:21:25.385 "name": "key0", 00:21:25.385 "path": "/tmp/tmp.tz3RWmovQo" 00:21:25.385 } 00:21:25.385 } 00:21:25.385 ] 00:21:25.385 }, 00:21:25.385 { 00:21:25.385 "subsystem": "iobuf", 00:21:25.385 "config": [ 00:21:25.385 { 00:21:25.385 "method": "iobuf_set_options", 00:21:25.385 "params": { 00:21:25.385 "small_pool_count": 8192, 00:21:25.385 "large_pool_count": 1024, 00:21:25.385 "small_bufsize": 8192, 00:21:25.385 "large_bufsize": 135168 00:21:25.385 } 00:21:25.385 } 00:21:25.385 ] 00:21:25.385 }, 
00:21:25.385 { 00:21:25.385 "subsystem": "sock", 00:21:25.385 "config": [ 00:21:25.385 { 00:21:25.385 "method": "sock_set_default_impl", 00:21:25.385 "params": { 00:21:25.385 "impl_name": "posix" 00:21:25.385 } 00:21:25.385 }, 00:21:25.385 { 00:21:25.385 "method": "sock_impl_set_options", 00:21:25.385 "params": { 00:21:25.385 "impl_name": "ssl", 00:21:25.385 "recv_buf_size": 4096, 00:21:25.385 "send_buf_size": 4096, 00:21:25.385 "enable_recv_pipe": true, 00:21:25.385 "enable_quickack": false, 00:21:25.385 "enable_placement_id": 0, 00:21:25.385 "enable_zerocopy_send_server": true, 00:21:25.385 "enable_zerocopy_send_client": false, 00:21:25.385 "zerocopy_threshold": 0, 00:21:25.385 "tls_version": 0, 00:21:25.385 "enable_ktls": false 00:21:25.385 } 00:21:25.385 }, 00:21:25.385 { 00:21:25.385 "method": "sock_impl_set_options", 00:21:25.385 "params": { 00:21:25.385 "impl_name": "posix", 00:21:25.385 "recv_buf_size": 2097152, 00:21:25.385 "send_buf_size": 2097152, 00:21:25.385 "enable_recv_pipe": true, 00:21:25.385 "enable_quickack": false, 00:21:25.385 "enable_placement_id": 0, 00:21:25.385 "enable_zerocopy_send_server": true, 00:21:25.385 "enable_zerocopy_send_client": false, 00:21:25.385 "zerocopy_threshold": 0, 00:21:25.385 "tls_version": 0, 00:21:25.385 "enable_ktls": false 00:21:25.385 } 00:21:25.385 } 00:21:25.385 ] 00:21:25.385 }, 00:21:25.385 { 00:21:25.385 "subsystem": "vmd", 00:21:25.385 "config": [] 00:21:25.385 }, 00:21:25.385 { 00:21:25.385 "subsystem": "accel", 00:21:25.385 "config": [ 00:21:25.385 { 00:21:25.385 "method": "accel_set_options", 00:21:25.385 "params": { 00:21:25.385 "small_cache_size": 128, 00:21:25.385 "large_cache_size": 16, 00:21:25.385 "task_count": 2048, 00:21:25.385 "sequence_count": 2048, 00:21:25.385 "buf_count": 2048 00:21:25.385 } 00:21:25.385 } 00:21:25.385 ] 00:21:25.385 }, 00:21:25.385 { 00:21:25.385 "subsystem": "bdev", 00:21:25.385 "config": [ 00:21:25.385 { 00:21:25.385 "method": "bdev_set_options", 00:21:25.385 "params": { 00:21:25.385 "bdev_io_pool_size": 65535, 00:21:25.385 "bdev_io_cache_size": 256, 00:21:25.385 "bdev_auto_examine": true, 00:21:25.385 "iobuf_small_cache_size": 128, 00:21:25.385 "iobuf_large_cache_size": 16 00:21:25.386 } 00:21:25.386 }, 00:21:25.386 { 00:21:25.386 "method": "bdev_raid_set_options", 00:21:25.386 "params": { 00:21:25.386 "process_window_size_kb": 1024 00:21:25.386 } 00:21:25.386 }, 00:21:25.386 { 00:21:25.386 "method": "bdev_iscsi_set_options", 00:21:25.386 "params": { 00:21:25.386 "timeout_sec": 30 00:21:25.386 } 00:21:25.386 }, 00:21:25.386 { 00:21:25.386 "method": "bdev_nvme_set_options", 00:21:25.386 "params": { 00:21:25.386 "action_on_timeout": "none", 00:21:25.386 "timeout_us": 0, 00:21:25.386 "timeout_admin_us": 0, 00:21:25.386 "keep_alive_timeout_ms": 10000, 00:21:25.386 "arbitration_burst": 0, 00:21:25.386 "low_priority_weight": 0, 00:21:25.386 "medium_priority_weight": 0, 00:21:25.386 "high_priority_weight": 0, 00:21:25.386 "nvme_adminq_poll_period_us": 10000, 00:21:25.386 "nvme_ioq_poll_period_us": 0, 00:21:25.386 "io_queue_requests": 0, 00:21:25.386 "delay_cmd_submit": true, 00:21:25.386 "transport_retry_count": 4, 00:21:25.386 "bdev_retry_count": 3, 00:21:25.386 "transport_ack_timeout": 0, 00:21:25.386 "ctrlr_loss_timeout_sec": 0, 00:21:25.386 "reconnect_delay_sec": 0, 00:21:25.386 "fast_io_fail_timeout_sec": 0, 00:21:25.386 "disable_auto_failback": false, 00:21:25.386 "generate_uuids": false, 00:21:25.386 "transport_tos": 0, 00:21:25.386 "nvme_error_stat": false, 00:21:25.386 "rdma_srq_size": 0, 
00:21:25.386 "io_path_stat": false, 00:21:25.386 "allow_accel_sequence": false, 00:21:25.386 "rdma_max_cq_size": 0, 00:21:25.386 "rdma_cm_event_timeout_ms": 0, 00:21:25.386 "dhchap_digests": [ 00:21:25.386 "sha256", 00:21:25.386 "sha384", 00:21:25.386 "sha512" 00:21:25.386 ], 00:21:25.386 "dhchap_dhgroups": [ 00:21:25.386 "null", 00:21:25.386 "ffdhe2048", 00:21:25.386 "ffdhe3072", 00:21:25.386 "ffdhe4096", 00:21:25.386 "ffdhe6144", 00:21:25.386 "ffdhe8192" 00:21:25.386 ] 00:21:25.386 } 00:21:25.386 }, 00:21:25.386 { 00:21:25.386 "method": "bdev_nvme_set_hotplug", 00:21:25.386 "params": { 00:21:25.386 "period_us": 100000, 00:21:25.386 "enable": false 00:21:25.386 } 00:21:25.386 }, 00:21:25.386 { 00:21:25.386 "method": "bdev_malloc_create", 00:21:25.386 "params": { 00:21:25.386 "name": "malloc0", 00:21:25.386 "num_blocks": 8192, 00:21:25.386 "block_size": 4096, 00:21:25.386 "physical_block_size": 4096, 00:21:25.386 "uuid": "2745431d-b53c-44fd-a245-6d188883e04b", 00:21:25.386 "optimal_io_boundary": 0 00:21:25.386 } 00:21:25.386 }, 00:21:25.386 { 00:21:25.386 "method": "bdev_wait_for_examine" 00:21:25.386 } 00:21:25.386 ] 00:21:25.386 }, 00:21:25.386 { 00:21:25.386 "subsystem": "nbd", 00:21:25.386 "config": [] 00:21:25.386 }, 00:21:25.386 { 00:21:25.386 "subsystem": "scheduler", 00:21:25.386 "config": [ 00:21:25.386 { 00:21:25.386 "method": "framework_set_scheduler", 00:21:25.386 "params": { 00:21:25.386 "name": "static" 00:21:25.386 } 00:21:25.386 } 00:21:25.386 ] 00:21:25.386 }, 00:21:25.386 { 00:21:25.386 "subsystem": "nvmf", 00:21:25.386 "config": [ 00:21:25.386 { 00:21:25.386 "method": "nvmf_set_config", 00:21:25.386 "params": { 00:21:25.386 "discovery_filter": "match_any", 00:21:25.386 "admin_cmd_passthru": { 00:21:25.386 "identify_ctrlr": false 00:21:25.386 } 00:21:25.386 } 00:21:25.386 }, 00:21:25.386 { 00:21:25.386 "method": "nvmf_set_max_subsystems", 00:21:25.386 "params": { 00:21:25.386 "max_subsystems": 1024 00:21:25.386 } 00:21:25.386 }, 00:21:25.386 { 00:21:25.386 "method": "nvmf_set_crdt", 00:21:25.386 "params": { 00:21:25.386 "crdt1": 0, 00:21:25.386 "crdt2": 0, 00:21:25.386 "crdt3": 0 00:21:25.386 } 00:21:25.386 }, 00:21:25.386 { 00:21:25.386 "method": "nvmf_create_transport", 00:21:25.386 "params": { 00:21:25.386 "trtype": "TCP", 00:21:25.386 "max_queue_depth": 128, 00:21:25.386 "max_io_qpairs_per_ctrlr": 127, 00:21:25.386 "in_capsule_data_size": 4096, 00:21:25.386 "max_io_size": 131072, 00:21:25.386 "io_unit_size": 131072, 00:21:25.386 "max_aq_depth": 128, 00:21:25.386 "num_shared_buffers": 511, 00:21:25.386 "buf_cache_size": 4294967295, 00:21:25.386 "dif_insert_or_strip": false, 00:21:25.386 "zcopy": false, 00:21:25.386 "c2h_success": false, 00:21:25.386 "sock_priority": 0, 00:21:25.386 "abort_timeout_sec": 1, 00:21:25.386 "ack_timeout": 0, 00:21:25.386 "data_wr_pool_size": 0 00:21:25.386 } 00:21:25.386 }, 00:21:25.386 { 00:21:25.386 "method": "nvmf_create_subsystem", 00:21:25.386 "params": { 00:21:25.386 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:25.386 "allow_any_host": false, 00:21:25.386 "serial_number": "00000000000000000000", 00:21:25.386 "model_number": "SPDK bdev Controller", 00:21:25.386 "max_namespaces": 32, 00:21:25.386 "min_cntlid": 1, 00:21:25.386 "max_cntlid": 65519, 00:21:25.386 "ana_reporting": false 00:21:25.386 } 00:21:25.386 }, 00:21:25.386 { 00:21:25.386 "method": "nvmf_subsystem_add_host", 00:21:25.386 "params": { 00:21:25.386 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:25.386 "host": "nqn.2016-06.io.spdk:host1", 00:21:25.386 "psk": "key0" 00:21:25.386 } 
00:21:25.386 }, 00:21:25.386 { 00:21:25.386 "method": "nvmf_subsystem_add_ns", 00:21:25.386 "params": { 00:21:25.386 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:25.386 "namespace": { 00:21:25.386 "nsid": 1, 00:21:25.386 "bdev_name": "malloc0", 00:21:25.386 "nguid": "2745431DB53C44FDA2456D188883E04B", 00:21:25.386 "uuid": "2745431d-b53c-44fd-a245-6d188883e04b", 00:21:25.386 "no_auto_visible": false 00:21:25.386 } 00:21:25.386 } 00:21:25.386 }, 00:21:25.386 { 00:21:25.386 "method": "nvmf_subsystem_add_listener", 00:21:25.386 "params": { 00:21:25.386 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:25.386 "listen_address": { 00:21:25.386 "trtype": "TCP", 00:21:25.386 "adrfam": "IPv4", 00:21:25.386 "traddr": "10.0.0.2", 00:21:25.386 "trsvcid": "4420" 00:21:25.386 }, 00:21:25.386 "secure_channel": false, 00:21:25.386 "sock_impl": "ssl" 00:21:25.386 } 00:21:25.386 } 00:21:25.386 ] 00:21:25.386 } 00:21:25.386 ] 00:21:25.386 }' 00:21:25.386 22:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:25.386 22:18:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2817478 00:21:25.386 22:18:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2817478 00:21:25.386 22:18:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:25.386 22:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2817478 ']' 00:21:25.386 22:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:25.386 22:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:25.386 22:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:25.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:25.386 22:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:25.386 22:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:25.386 [2024-07-15 22:18:50.548792] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:21:25.386 [2024-07-15 22:18:50.548845] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:25.386 EAL: No free 2048 kB hugepages reported on node 1 00:21:25.386 [2024-07-15 22:18:50.613147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.386 [2024-07-15 22:18:50.677989] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:25.386 [2024-07-15 22:18:50.678025] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:25.386 [2024-07-15 22:18:50.678032] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:25.386 [2024-07-15 22:18:50.678039] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:25.386 [2024-07-15 22:18:50.678045] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
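The -c /dev/fd/62 on the nvmf_tgt command line above is that captured target configuration being fed back in: rather than re-issuing the individual RPCs, the target is restarted from the saved JSON, so the TLS listener, namespace and PSK-protected host all come back purely from config. The /dev/fd path is consistent with bash process substitution; a sketch of the equivalent step, assuming tgtcfg holds the JSON captured earlier:

  # restart the target from the captured config rather than via individual RPCs
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &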
00:21:25.386 [2024-07-15 22:18:50.678095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.648 [2024-07-15 22:18:50.875371] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:25.648 [2024-07-15 22:18:50.907387] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:25.648 [2024-07-15 22:18:50.922299] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:26.220 22:18:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:26.220 22:18:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:26.220 22:18:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:26.220 22:18:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:26.220 22:18:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:26.220 22:18:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:26.220 22:18:51 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=2817803 00:21:26.220 22:18:51 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 2817803 /var/tmp/bdevperf.sock 00:21:26.220 22:18:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2817803 ']' 00:21:26.220 22:18:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:26.220 22:18:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:26.220 22:18:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:26.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:26.220 22:18:51 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:26.220 22:18:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:26.220 22:18:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:26.220 22:18:51 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:21:26.220 "subsystems": [ 00:21:26.220 { 00:21:26.220 "subsystem": "keyring", 00:21:26.220 "config": [ 00:21:26.220 { 00:21:26.220 "method": "keyring_file_add_key", 00:21:26.220 "params": { 00:21:26.220 "name": "key0", 00:21:26.220 "path": "/tmp/tmp.tz3RWmovQo" 00:21:26.220 } 00:21:26.220 } 00:21:26.220 ] 00:21:26.220 }, 00:21:26.220 { 00:21:26.220 "subsystem": "iobuf", 00:21:26.220 "config": [ 00:21:26.220 { 00:21:26.220 "method": "iobuf_set_options", 00:21:26.220 "params": { 00:21:26.220 "small_pool_count": 8192, 00:21:26.220 "large_pool_count": 1024, 00:21:26.220 "small_bufsize": 8192, 00:21:26.220 "large_bufsize": 135168 00:21:26.220 } 00:21:26.220 } 00:21:26.220 ] 00:21:26.220 }, 00:21:26.220 { 00:21:26.220 "subsystem": "sock", 00:21:26.220 "config": [ 00:21:26.220 { 00:21:26.220 "method": "sock_set_default_impl", 00:21:26.220 "params": { 00:21:26.220 "impl_name": "posix" 00:21:26.220 } 00:21:26.220 }, 00:21:26.220 { 00:21:26.220 "method": "sock_impl_set_options", 00:21:26.220 "params": { 00:21:26.220 "impl_name": "ssl", 00:21:26.220 "recv_buf_size": 4096, 00:21:26.220 "send_buf_size": 4096, 00:21:26.220 "enable_recv_pipe": true, 00:21:26.221 "enable_quickack": false, 00:21:26.221 "enable_placement_id": 0, 00:21:26.221 "enable_zerocopy_send_server": true, 00:21:26.221 "enable_zerocopy_send_client": false, 00:21:26.221 "zerocopy_threshold": 0, 00:21:26.221 "tls_version": 0, 00:21:26.221 "enable_ktls": false 00:21:26.221 } 00:21:26.221 }, 00:21:26.221 { 00:21:26.221 "method": "sock_impl_set_options", 00:21:26.221 "params": { 00:21:26.221 "impl_name": "posix", 00:21:26.221 "recv_buf_size": 2097152, 00:21:26.221 "send_buf_size": 2097152, 00:21:26.221 "enable_recv_pipe": true, 00:21:26.221 "enable_quickack": false, 00:21:26.221 "enable_placement_id": 0, 00:21:26.221 "enable_zerocopy_send_server": true, 00:21:26.221 "enable_zerocopy_send_client": false, 00:21:26.221 "zerocopy_threshold": 0, 00:21:26.221 "tls_version": 0, 00:21:26.221 "enable_ktls": false 00:21:26.221 } 00:21:26.221 } 00:21:26.221 ] 00:21:26.221 }, 00:21:26.221 { 00:21:26.221 "subsystem": "vmd", 00:21:26.221 "config": [] 00:21:26.221 }, 00:21:26.221 { 00:21:26.221 "subsystem": "accel", 00:21:26.221 "config": [ 00:21:26.221 { 00:21:26.221 "method": "accel_set_options", 00:21:26.221 "params": { 00:21:26.221 "small_cache_size": 128, 00:21:26.221 "large_cache_size": 16, 00:21:26.221 "task_count": 2048, 00:21:26.221 "sequence_count": 2048, 00:21:26.221 "buf_count": 2048 00:21:26.221 } 00:21:26.221 } 00:21:26.221 ] 00:21:26.221 }, 00:21:26.221 { 00:21:26.221 "subsystem": "bdev", 00:21:26.221 "config": [ 00:21:26.221 { 00:21:26.221 "method": "bdev_set_options", 00:21:26.221 "params": { 00:21:26.221 "bdev_io_pool_size": 65535, 00:21:26.221 "bdev_io_cache_size": 256, 00:21:26.221 "bdev_auto_examine": true, 00:21:26.221 "iobuf_small_cache_size": 128, 00:21:26.221 "iobuf_large_cache_size": 16 00:21:26.221 } 00:21:26.221 }, 00:21:26.221 { 00:21:26.221 "method": "bdev_raid_set_options", 00:21:26.221 "params": { 00:21:26.221 "process_window_size_kb": 1024 00:21:26.221 } 
00:21:26.221 }, 00:21:26.221 { 00:21:26.221 "method": "bdev_iscsi_set_options", 00:21:26.221 "params": { 00:21:26.221 "timeout_sec": 30 00:21:26.221 } 00:21:26.221 }, 00:21:26.221 { 00:21:26.221 "method": "bdev_nvme_set_options", 00:21:26.221 "params": { 00:21:26.221 "action_on_timeout": "none", 00:21:26.221 "timeout_us": 0, 00:21:26.221 "timeout_admin_us": 0, 00:21:26.221 "keep_alive_timeout_ms": 10000, 00:21:26.221 "arbitration_burst": 0, 00:21:26.221 "low_priority_weight": 0, 00:21:26.221 "medium_priority_weight": 0, 00:21:26.221 "high_priority_weight": 0, 00:21:26.221 "nvme_adminq_poll_period_us": 10000, 00:21:26.221 "nvme_ioq_poll_period_us": 0, 00:21:26.221 "io_queue_requests": 512, 00:21:26.221 "delay_cmd_submit": true, 00:21:26.221 "transport_retry_count": 4, 00:21:26.221 "bdev_retry_count": 3, 00:21:26.221 "transport_ack_timeout": 0, 00:21:26.221 "ctrlr_loss_timeout_sec": 0, 00:21:26.221 "reconnect_delay_sec": 0, 00:21:26.221 "fast_io_fail_timeout_sec": 0, 00:21:26.221 "disable_auto_failback": false, 00:21:26.221 "generate_uuids": false, 00:21:26.221 "transport_tos": 0, 00:21:26.221 "nvme_error_stat": false, 00:21:26.221 "rdma_srq_size": 0, 00:21:26.221 "io_path_stat": false, 00:21:26.221 "allow_accel_sequence": false, 00:21:26.221 "rdma_max_cq_size": 0, 00:21:26.221 "rdma_cm_event_timeout_ms": 0, 00:21:26.221 "dhchap_digests": [ 00:21:26.221 "sha256", 00:21:26.221 "sha384", 00:21:26.221 "sha512" 00:21:26.221 ], 00:21:26.221 "dhchap_dhgroups": [ 00:21:26.221 "null", 00:21:26.221 "ffdhe2048", 00:21:26.221 "ffdhe3072", 00:21:26.221 "ffdhe4096", 00:21:26.221 "ffdhe6144", 00:21:26.221 "ffdhe8192" 00:21:26.221 ] 00:21:26.221 } 00:21:26.221 }, 00:21:26.221 { 00:21:26.221 "method": "bdev_nvme_attach_controller", 00:21:26.221 "params": { 00:21:26.221 "name": "nvme0", 00:21:26.221 "trtype": "TCP", 00:21:26.221 "adrfam": "IPv4", 00:21:26.221 "traddr": "10.0.0.2", 00:21:26.221 "trsvcid": "4420", 00:21:26.221 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:26.221 "prchk_reftag": false, 00:21:26.221 "prchk_guard": false, 00:21:26.221 "ctrlr_loss_timeout_sec": 0, 00:21:26.221 "reconnect_delay_sec": 0, 00:21:26.221 "fast_io_fail_timeout_sec": 0, 00:21:26.221 "psk": "key0", 00:21:26.221 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:26.221 "hdgst": false, 00:21:26.221 "ddgst": false 00:21:26.221 } 00:21:26.221 }, 00:21:26.221 { 00:21:26.221 "method": "bdev_nvme_set_hotplug", 00:21:26.221 "params": { 00:21:26.221 "period_us": 100000, 00:21:26.221 "enable": false 00:21:26.221 } 00:21:26.221 }, 00:21:26.221 { 00:21:26.221 "method": "bdev_enable_histogram", 00:21:26.221 "params": { 00:21:26.221 "name": "nvme0n1", 00:21:26.221 "enable": true 00:21:26.221 } 00:21:26.221 }, 00:21:26.221 { 00:21:26.221 "method": "bdev_wait_for_examine" 00:21:26.221 } 00:21:26.221 ] 00:21:26.221 }, 00:21:26.221 { 00:21:26.221 "subsystem": "nbd", 00:21:26.221 "config": [] 00:21:26.221 } 00:21:26.221 ] 00:21:26.221 }' 00:21:26.221 [2024-07-15 22:18:51.398527] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
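Same idea on the initiator side: bdevperf is relaunched with -c /dev/fd/63, i.e. the bperfcfg JSON echoed above, so the keyring entry and the TLS-protected bdev_nvme_attach_controller (plus bdev_enable_histogram) happen at startup from config. The trace that follows then only has to confirm the controller exists and kick off the run; condensed, assuming bperfcfg from the earlier capture:

  "$SPDK/build/examples/bdevperf" -m 2 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &
  # the replayed config should have created controller nvme0
  name=$("$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
  [[ "$name" == "nvme0" ]]
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests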
00:21:26.221 [2024-07-15 22:18:51.398580] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2817803 ] 00:21:26.221 EAL: No free 2048 kB hugepages reported on node 1 00:21:26.221 [2024-07-15 22:18:51.474066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.221 [2024-07-15 22:18:51.527533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:26.482 [2024-07-15 22:18:51.660933] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:27.054 22:18:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:27.054 22:18:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:27.054 22:18:52 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:27.054 22:18:52 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:21:27.054 22:18:52 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.054 22:18:52 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:27.315 Running I/O for 1 seconds... 00:21:28.258 00:21:28.258 Latency(us) 00:21:28.258 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:28.258 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:28.258 Verification LBA range: start 0x0 length 0x2000 00:21:28.258 nvme0n1 : 1.07 1859.56 7.26 0.00 0.00 66766.07 5952.85 113595.73 00:21:28.258 =================================================================================================================== 00:21:28.258 Total : 1859.56 7.26 0.00 0.00 66766.07 5952.85 113595.73 00:21:28.258 0 00:21:28.258 22:18:53 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:21:28.258 22:18:53 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:21:28.258 22:18:53 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:28.258 22:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:21:28.258 22:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:21:28.258 22:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:21:28.258 22:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:28.258 22:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:21:28.258 22:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:21:28.258 22:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:21:28.258 22:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:28.258 nvmf_trace.0 00:21:28.518 22:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:21:28.518 22:18:53 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 2817803 00:21:28.518 22:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2817803 ']' 00:21:28.518 22:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- 
# kill -0 2817803 00:21:28.518 22:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:28.518 22:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:28.518 22:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2817803 00:21:28.518 22:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:28.518 22:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:28.518 22:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2817803' 00:21:28.518 killing process with pid 2817803 00:21:28.518 22:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2817803 00:21:28.518 Received shutdown signal, test time was about 1.000000 seconds 00:21:28.518 00:21:28.518 Latency(us) 00:21:28.518 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:28.518 =================================================================================================================== 00:21:28.518 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:28.518 22:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2817803 00:21:28.518 22:18:53 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:28.518 22:18:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:28.518 22:18:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:21:28.518 22:18:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:28.518 22:18:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:21:28.518 22:18:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:28.518 22:18:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:28.518 rmmod nvme_tcp 00:21:28.518 rmmod nvme_fabrics 00:21:28.518 rmmod nvme_keyring 00:21:28.518 22:18:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:28.518 22:18:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:21:28.518 22:18:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:21:28.518 22:18:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2817478 ']' 00:21:28.518 22:18:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2817478 00:21:28.518 22:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2817478 ']' 00:21:28.518 22:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2817478 00:21:28.518 22:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:28.518 22:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:28.518 22:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2817478 00:21:28.778 22:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:28.778 22:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:28.778 22:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2817478' 00:21:28.779 killing process with pid 2817478 00:21:28.779 22:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2817478 00:21:28.779 22:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2817478 00:21:28.779 22:18:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:28.779 22:18:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:28.779 22:18:53 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:28.779 22:18:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:28.779 22:18:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:28.779 22:18:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:28.779 22:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:28.779 22:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.326 22:18:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:31.326 22:18:56 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.OyyQ6uttse /tmp/tmp.gLlyBK6Xmw /tmp/tmp.tz3RWmovQo 00:21:31.326 00:21:31.326 real 1m23.338s 00:21:31.326 user 2m6.766s 00:21:31.326 sys 0m28.830s 00:21:31.326 22:18:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:31.326 22:18:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:31.326 ************************************ 00:21:31.326 END TEST nvmf_tls 00:21:31.326 ************************************ 00:21:31.326 22:18:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:31.326 22:18:56 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:31.326 22:18:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:31.326 22:18:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:31.326 22:18:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:31.326 ************************************ 00:21:31.326 START TEST nvmf_fips 00:21:31.326 ************************************ 00:21:31.326 22:18:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:31.326 * Looking for test storage... 
00:21:31.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:31.326 22:18:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:31.326 22:18:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:31.326 22:18:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:31.326 22:18:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:31.326 22:18:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:31.326 22:18:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:31.326 22:18:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:31.326 22:18:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:31.326 22:18:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:31.326 22:18:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:31.326 22:18:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:31.326 22:18:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:31.326 22:18:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:31.326 22:18:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:31.326 22:18:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:31.326 22:18:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:31.326 22:18:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.327 22:18:56 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:21:31.327 Error setting digest 00:21:31.327 00E2F43D687F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:31.327 00E2F43D687F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:31.327 22:18:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:31.328 22:18:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:31.328 22:18:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:31.328 22:18:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:31.328 22:18:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.328 22:18:56 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:31.328 22:18:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.328 22:18:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:31.328 22:18:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:31.328 22:18:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:21:31.328 22:18:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:37.923 
22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:37.923 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:37.923 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:37.923 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:37.923 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:37.923 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:38.184 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:38.184 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:38.184 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:38.184 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:38.184 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:38.184 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:38.184 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:38.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:38.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.562 ms 00:21:38.184 00:21:38.184 --- 10.0.0.2 ping statistics --- 00:21:38.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.184 rtt min/avg/max/mdev = 0.562/0.562/0.562/0.000 ms 00:21:38.184 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:38.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:38.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.375 ms 00:21:38.184 00:21:38.184 --- 10.0.0.1 ping statistics --- 00:21:38.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.184 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:21:38.184 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:38.184 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:21:38.184 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:38.184 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:38.184 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:38.184 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:38.184 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:38.184 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:38.184 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:38.444 22:19:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:21:38.444 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:38.444 22:19:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:38.444 22:19:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:38.444 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2822496 00:21:38.444 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2822496 00:21:38.444 22:19:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:38.444 22:19:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2822496 ']' 00:21:38.444 22:19:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.444 22:19:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:38.444 22:19:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:38.444 22:19:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:38.444 22:19:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:38.444 [2024-07-15 22:19:03.606714] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:21:38.444 [2024-07-15 22:19:03.606784] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:38.444 EAL: No free 2048 kB hugepages reported on node 1 00:21:38.444 [2024-07-15 22:19:03.693026] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.704 [2024-07-15 22:19:03.786134] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:38.704 [2024-07-15 22:19:03.786190] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:38.704 [2024-07-15 22:19:03.786198] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:38.704 [2024-07-15 22:19:03.786205] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:38.704 [2024-07-15 22:19:03.786217] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:38.704 [2024-07-15 22:19:03.786248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:39.276 22:19:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:39.276 22:19:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:21:39.276 22:19:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:39.276 22:19:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:39.276 22:19:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:39.276 22:19:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:39.276 22:19:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:21:39.276 22:19:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:39.276 22:19:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:39.276 22:19:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:39.276 22:19:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:39.276 22:19:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:39.276 22:19:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:39.276 22:19:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:39.276 [2024-07-15 22:19:04.566586] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:39.276 [2024-07-15 22:19:04.582579] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:39.276 [2024-07-15 22:19:04.582809] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:39.537 [2024-07-15 22:19:04.612667] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:39.537 malloc0 00:21:39.537 22:19:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:39.537 22:19:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2822557 00:21:39.537 22:19:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2822557 /var/tmp/bdevperf.sock 00:21:39.537 22:19:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:39.537 22:19:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2822557 ']' 00:21:39.537 22:19:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:39.537 22:19:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:21:39.537 22:19:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:39.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:39.537 22:19:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:39.537 22:19:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:39.537 [2024-07-15 22:19:04.706757] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:21:39.537 [2024-07-15 22:19:04.706846] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2822557 ] 00:21:39.537 EAL: No free 2048 kB hugepages reported on node 1 00:21:39.537 [2024-07-15 22:19:04.764076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.537 [2024-07-15 22:19:04.829180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:40.510 22:19:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:40.510 22:19:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:21:40.510 22:19:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:40.510 [2024-07-15 22:19:05.613346] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:40.510 [2024-07-15 22:19:05.613411] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:40.510 TLSTESTn1 00:21:40.510 22:19:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:40.510 Running I/O for 10 seconds... 
00:21:52.744 00:21:52.744 Latency(us) 00:21:52.744 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:52.744 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:52.744 Verification LBA range: start 0x0 length 0x2000 00:21:52.744 TLSTESTn1 : 10.04 3369.06 13.16 0.00 0.00 37910.97 7536.64 64225.28 00:21:52.744 =================================================================================================================== 00:21:52.744 Total : 3369.06 13.16 0.00 0.00 37910.97 7536.64 64225.28 00:21:52.744 0 00:21:52.744 22:19:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:52.744 22:19:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:52.744 22:19:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:21:52.744 22:19:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:21:52.744 22:19:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:21:52.744 22:19:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:52.744 22:19:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:21:52.744 22:19:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:21:52.744 22:19:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:21:52.744 22:19:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:52.744 nvmf_trace.0 00:21:52.744 22:19:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:21:52.744 22:19:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2822557 00:21:52.744 22:19:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2822557 ']' 00:21:52.744 22:19:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2822557 00:21:52.744 22:19:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:21:52.744 22:19:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:52.744 22:19:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2822557 00:21:52.744 22:19:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:52.744 22:19:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:52.744 22:19:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2822557' 00:21:52.744 killing process with pid 2822557 00:21:52.744 22:19:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2822557 00:21:52.744 Received shutdown signal, test time was about 10.000000 seconds 00:21:52.744 00:21:52.744 Latency(us) 00:21:52.744 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:52.744 =================================================================================================================== 00:21:52.744 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:52.744 [2024-07-15 22:19:16.029845] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:52.744 22:19:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2822557 00:21:52.744 22:19:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:52.744 22:19:16 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:21:52.744 22:19:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:21:52.744 22:19:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:52.744 22:19:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:21:52.744 22:19:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:52.744 22:19:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:52.744 rmmod nvme_tcp 00:21:52.744 rmmod nvme_fabrics 00:21:52.744 rmmod nvme_keyring 00:21:52.744 22:19:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:52.744 22:19:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:21:52.744 22:19:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:21:52.744 22:19:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2822496 ']' 00:21:52.744 22:19:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2822496 00:21:52.744 22:19:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2822496 ']' 00:21:52.744 22:19:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2822496 00:21:52.744 22:19:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:21:52.744 22:19:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:52.744 22:19:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2822496 00:21:52.744 22:19:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:52.744 22:19:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:52.744 22:19:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2822496' 00:21:52.744 killing process with pid 2822496 00:21:52.744 22:19:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2822496 00:21:52.744 [2024-07-15 22:19:16.260844] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:52.744 22:19:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2822496 00:21:52.744 22:19:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:52.744 22:19:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:52.744 22:19:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:52.744 22:19:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:52.744 22:19:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:52.744 22:19:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.744 22:19:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:52.744 22:19:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.316 22:19:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:53.316 22:19:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:53.316 00:21:53.316 real 0m22.296s 00:21:53.316 user 0m23.907s 00:21:53.316 sys 0m9.096s 00:21:53.316 22:19:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:53.316 22:19:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:53.316 ************************************ 00:21:53.316 END TEST nvmf_fips 
00:21:53.316 ************************************ 00:21:53.316 22:19:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:53.316 22:19:18 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:21:53.316 22:19:18 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:21:53.316 22:19:18 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:21:53.316 22:19:18 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:21:53.316 22:19:18 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:21:53.316 22:19:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:59.905 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:59.905 22:19:25 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:59.905 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:59.905 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:59.905 22:19:25 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:59.906 22:19:25 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:59.906 22:19:25 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:59.906 22:19:25 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:59.906 22:19:25 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:59.906 22:19:25 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:59.906 22:19:25 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:59.906 22:19:25 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:59.906 22:19:25 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:59.906 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:59.906 22:19:25 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:59.906 22:19:25 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:59.906 22:19:25 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:59.906 22:19:25 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:21:59.906 22:19:25 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:59.906 22:19:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:59.906 22:19:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:21:59.906 22:19:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:00.168 ************************************ 00:22:00.168 START TEST nvmf_perf_adq 00:22:00.168 ************************************ 00:22:00.168 22:19:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:00.168 * Looking for test storage... 00:22:00.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:00.168 22:19:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:00.168 22:19:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:00.168 22:19:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:00.168 22:19:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:00.168 22:19:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:00.168 22:19:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:00.168 22:19:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:00.168 22:19:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:00.168 22:19:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:00.168 22:19:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:00.168 22:19:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:00.168 22:19:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:00.168 22:19:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:00.168 22:19:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:00.168 22:19:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:00.168 22:19:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:00.168 22:19:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:00.168 22:19:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:00.168 22:19:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:00.168 22:19:25 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:00.168 22:19:25 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:00.168 22:19:25 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:00.168 22:19:25 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.168 22:19:25 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.168 22:19:25 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.168 22:19:25 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:00.168 22:19:25 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.168 22:19:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:22:00.168 22:19:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:00.168 22:19:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:00.168 22:19:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:00.168 22:19:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:00.168 22:19:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:00.168 22:19:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:00.168 22:19:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:00.168 22:19:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:00.168 22:19:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:00.168 22:19:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:00.168 22:19:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:06.757 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:06.757 Found 0000:4b:00.1 (0x8086 - 0x159b) 
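Note: the gather_supported_nvmf_pci_devs trace above classifies NICs purely by PCI vendor:device ID (0x8086:0x159b is one of the E810 IDs bound to the ice driver) and then resolves each matching function to its kernel netdev through sysfs. A minimal standalone sketch of the same lookup, assuming lspci and sysfs are available and using only the one device ID that appears in this log, would be:

    # Sketch only: find E810 (ice) ports by PCI ID and map each one to its netdev,
    # mirroring the nvmf/common.sh discovery traced above. The other IDs listed in
    # the trace (0x1592, 0x37d2, the Mellanox set) could be added the same way.
    for pci in $(lspci -Dnd 8086:159b | awk '{print $1}'); do
        for netdev_path in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdev_path" ] || continue
            echo "Found ${pci}: ${netdev_path##*/}"
        done
    done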
00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:06.757 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:06.757 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:22:06.757 22:19:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:08.669 22:19:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:10.618 22:19:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:15.957 22:19:40 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:15.957 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:15.957 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:15.957 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:15.957 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:15.957 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:15.958 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:15.958 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:15.958 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:15.958 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:15.958 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:15.958 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:15.958 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:15.958 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:15.958 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:15.958 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:15.958 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:15.958 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:15.958 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:15.958 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:15.958 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:15.958 22:19:40 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:15.958 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:15.958 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.557 ms 00:22:15.958 00:22:15.958 --- 10.0.0.2 ping statistics --- 00:22:15.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.958 rtt min/avg/max/mdev = 0.557/0.557/0.557/0.000 ms 00:22:15.958 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:15.958 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:15.958 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.468 ms 00:22:15.958 00:22:15.958 --- 10.0.0.1 ping statistics --- 00:22:15.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.958 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:22:15.958 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:15.958 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:15.958 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:15.958 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:15.958 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:15.958 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:15.958 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:15.958 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:15.958 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:15.958 22:19:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:15.958 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:15.958 22:19:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:15.958 22:19:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:15.958 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2834429 00:22:15.958 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2834429 00:22:15.958 22:19:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:15.958 22:19:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 2834429 ']' 00:22:15.958 22:19:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:15.958 22:19:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:15.958 22:19:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:15.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:15.958 22:19:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:15.958 22:19:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:15.958 [2024-07-15 22:19:40.940018] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
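Note: the nvmf_tcp_init sequence above splits the two E810 ports found earlier so a single host can act as both sides of the test: cvl_0_0 is moved into a private network namespace and becomes the target at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, TCP/4420 is opened in iptables, and reachability is confirmed with one ping in each direction before nvme-tcp is loaded and nvmf_tgt is started inside the namespace. A condensed sketch of that setup, reusing the interface names and addresses from this log (they will differ on other machines), is:

    # Sketch of the namespace topology built by nvmf_tcp_init in the trace above.
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                          # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator address
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                       # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1                   # target -> initiator
    modprobe nvme-tcp                                        # initiator-side NVMe/TCP driver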
00:22:15.958 [2024-07-15 22:19:40.940079] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:15.958 EAL: No free 2048 kB hugepages reported on node 1 00:22:15.958 [2024-07-15 22:19:41.010301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:15.958 [2024-07-15 22:19:41.088362] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:15.958 [2024-07-15 22:19:41.088402] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:15.958 [2024-07-15 22:19:41.088410] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:15.958 [2024-07-15 22:19:41.088417] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:15.958 [2024-07-15 22:19:41.088423] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:15.958 [2024-07-15 22:19:41.088561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.958 [2024-07-15 22:19:41.088683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:15.958 [2024-07-15 22:19:41.088827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:15.958 [2024-07-15 22:19:41.088829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:16.530 22:19:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:16.530 22:19:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:22:16.530 22:19:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:16.530 22:19:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:16.530 22:19:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.530 22:19:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:16.530 22:19:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:22:16.530 22:19:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:16.530 22:19:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:16.530 22:19:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.530 22:19:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.530 22:19:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.530 22:19:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:16.530 22:19:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:16.530 22:19:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.530 22:19:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.530 22:19:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.530 22:19:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:16.530 22:19:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.530 22:19:41 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:22:16.792 22:19:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.792 22:19:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:16.792 22:19:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.792 22:19:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.792 [2024-07-15 22:19:41.899135] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:16.792 22:19:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.792 22:19:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:16.792 22:19:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.792 22:19:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.792 Malloc1 00:22:16.792 22:19:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.792 22:19:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:16.792 22:19:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.792 22:19:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.792 22:19:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.792 22:19:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:16.792 22:19:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.792 22:19:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.792 22:19:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.792 22:19:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:16.792 22:19:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.792 22:19:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.792 [2024-07-15 22:19:41.958496] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:16.792 22:19:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.792 22:19:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=2834695 00:22:16.792 22:19:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:22:16.792 22:19:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:16.792 EAL: No free 2048 kB hugepages reported on node 1 00:22:18.707 22:19:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:22:18.707 22:19:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.707 22:19:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:18.707 22:19:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.707 22:19:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:22:18.707 
"tick_rate": 2400000000, 00:22:18.707 "poll_groups": [ 00:22:18.707 { 00:22:18.707 "name": "nvmf_tgt_poll_group_000", 00:22:18.707 "admin_qpairs": 1, 00:22:18.707 "io_qpairs": 1, 00:22:18.707 "current_admin_qpairs": 1, 00:22:18.707 "current_io_qpairs": 1, 00:22:18.707 "pending_bdev_io": 0, 00:22:18.707 "completed_nvme_io": 20387, 00:22:18.707 "transports": [ 00:22:18.707 { 00:22:18.707 "trtype": "TCP" 00:22:18.707 } 00:22:18.707 ] 00:22:18.707 }, 00:22:18.707 { 00:22:18.707 "name": "nvmf_tgt_poll_group_001", 00:22:18.707 "admin_qpairs": 0, 00:22:18.707 "io_qpairs": 1, 00:22:18.707 "current_admin_qpairs": 0, 00:22:18.707 "current_io_qpairs": 1, 00:22:18.707 "pending_bdev_io": 0, 00:22:18.707 "completed_nvme_io": 29002, 00:22:18.707 "transports": [ 00:22:18.707 { 00:22:18.707 "trtype": "TCP" 00:22:18.707 } 00:22:18.707 ] 00:22:18.707 }, 00:22:18.707 { 00:22:18.707 "name": "nvmf_tgt_poll_group_002", 00:22:18.707 "admin_qpairs": 0, 00:22:18.707 "io_qpairs": 1, 00:22:18.707 "current_admin_qpairs": 0, 00:22:18.707 "current_io_qpairs": 1, 00:22:18.707 "pending_bdev_io": 0, 00:22:18.707 "completed_nvme_io": 20638, 00:22:18.707 "transports": [ 00:22:18.707 { 00:22:18.707 "trtype": "TCP" 00:22:18.707 } 00:22:18.707 ] 00:22:18.707 }, 00:22:18.707 { 00:22:18.707 "name": "nvmf_tgt_poll_group_003", 00:22:18.707 "admin_qpairs": 0, 00:22:18.707 "io_qpairs": 1, 00:22:18.707 "current_admin_qpairs": 0, 00:22:18.707 "current_io_qpairs": 1, 00:22:18.707 "pending_bdev_io": 0, 00:22:18.707 "completed_nvme_io": 20250, 00:22:18.707 "transports": [ 00:22:18.707 { 00:22:18.707 "trtype": "TCP" 00:22:18.707 } 00:22:18.707 ] 00:22:18.707 } 00:22:18.707 ] 00:22:18.707 }' 00:22:18.707 22:19:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:18.707 22:19:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:22:18.968 22:19:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:22:18.968 22:19:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:22:18.968 22:19:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 2834695 00:22:27.133 Initializing NVMe Controllers 00:22:27.133 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:27.133 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:27.133 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:27.133 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:27.133 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:27.133 Initialization complete. Launching workers. 
00:22:27.133 ======================================================== 00:22:27.133 Latency(us) 00:22:27.133 Device Information : IOPS MiB/s Average min max 00:22:27.133 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 13821.10 53.99 4630.84 1118.65 8677.22 00:22:27.133 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 15004.50 58.61 4278.17 1084.05 45443.93 00:22:27.133 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13528.50 52.85 4730.54 1585.22 11197.25 00:22:27.133 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11055.60 43.19 5788.76 1654.78 11298.66 00:22:27.133 ======================================================== 00:22:27.133 Total : 53409.70 208.63 4796.70 1084.05 45443.93 00:22:27.133 00:22:27.133 22:19:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:22:27.133 22:19:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:27.133 22:19:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:27.133 22:19:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:27.133 22:19:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:27.133 22:19:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:27.133 22:19:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:27.133 rmmod nvme_tcp 00:22:27.133 rmmod nvme_fabrics 00:22:27.133 rmmod nvme_keyring 00:22:27.133 22:19:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:27.133 22:19:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:27.133 22:19:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:27.133 22:19:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2834429 ']' 00:22:27.133 22:19:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2834429 00:22:27.133 22:19:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 2834429 ']' 00:22:27.133 22:19:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 2834429 00:22:27.133 22:19:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:22:27.133 22:19:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:27.133 22:19:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2834429 00:22:27.133 22:19:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:27.133 22:19:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:27.133 22:19:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2834429' 00:22:27.133 killing process with pid 2834429 00:22:27.133 22:19:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 2834429 00:22:27.133 22:19:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 2834429 00:22:27.133 22:19:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:27.133 22:19:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:27.133 22:19:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:27.133 22:19:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:27.133 22:19:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:27.133 22:19:52 nvmf_tcp.nvmf_perf_adq 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:27.133 22:19:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:27.133 22:19:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:29.679 22:19:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:29.679 22:19:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:22:29.679 22:19:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:31.064 22:19:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:32.979 22:19:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:38.271 
22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:38.271 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:38.271 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
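Note: both passes of this test drive the same target bring-up over RPC. The baseline pass above used --enable-placement-id 0 and --sock-priority 0; the ADQ pass that starts here repeats the sequence with both set to 1. The harness issues these calls through its rpc_cmd helper against the /var/tmp/spdk.sock shown in waitforlisten; a rough standalone equivalent with scripts/rpc.py, assuming an SPDK checkout as the working directory and an nvmf_tgt already started with -m 0xF --wait-for-rpc as in the trace, is:

    # Sketch of the per-pass target configuration replayed from the trace above.
    # PRIO=0 reproduces the baseline pass; PRIO=1 matches the ADQ pass below.
    PRIO=0
    RPC=./scripts/rpc.py
    $RPC sock_impl_set_options --enable-placement-id $PRIO --enable-zerocopy-send-server -i posix
    $RPC framework_start_init
    $RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority $PRIO
    $RPC bdev_malloc_create 64 512 -b Malloc1
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Drive the initiator-side load the same way the harness does, then confirm how the
    # IO qpairs landed on poll groups (the baseline pass expects 4 groups with 1 qpair each).
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
    $RPC nvmf_get_stats | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' | wc -l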
00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:38.271 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:38.271 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.272 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:38.272 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:38.272 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.272 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:38.272 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:38.272 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.272 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:38.272 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:38.272 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:38.272 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:38.272 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:38.272 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:38.272 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:38.272 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:38.272 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:38.272 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:38.272 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:38.272 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:38.272 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:38.272 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:38.272 22:20:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:38.272 22:20:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:38.272 22:20:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:38.272 22:20:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:38.272 
22:20:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:38.272 22:20:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:38.272 22:20:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:38.272 22:20:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:38.272 22:20:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:38.272 22:20:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:38.272 22:20:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:38.272 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:38.272 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.480 ms 00:22:38.272 00:22:38.272 --- 10.0.0.2 ping statistics --- 00:22:38.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.272 rtt min/avg/max/mdev = 0.480/0.480/0.480/0.000 ms 00:22:38.272 22:20:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:38.272 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:38.272 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.377 ms 00:22:38.272 00:22:38.272 --- 10.0.0.1 ping statistics --- 00:22:38.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.272 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:22:38.272 22:20:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:38.272 22:20:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:38.272 22:20:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:38.272 22:20:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:38.272 22:20:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:38.272 22:20:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:38.272 22:20:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:38.272 22:20:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:38.272 22:20:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:38.272 22:20:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:22:38.272 22:20:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:38.272 22:20:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:38.272 22:20:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:38.272 net.core.busy_poll = 1 00:22:38.272 22:20:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:38.272 net.core.busy_read = 1 00:22:38.272 22:20:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:38.272 22:20:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:38.272 22:20:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:22:38.272 22:20:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:38.272 22:20:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:38.272 22:20:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:38.272 22:20:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:38.272 22:20:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:38.272 22:20:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:38.272 22:20:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2839244 00:22:38.272 22:20:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2839244 00:22:38.272 22:20:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:38.272 22:20:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 2839244 ']' 00:22:38.272 22:20:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.272 22:20:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:38.272 22:20:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:38.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:38.272 22:20:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:38.272 22:20:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:38.533 [2024-07-15 22:20:03.648867] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:22:38.533 [2024-07-15 22:20:03.648926] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:38.533 EAL: No free 2048 kB hugepages reported on node 1 00:22:38.533 [2024-07-15 22:20:03.718014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:38.533 [2024-07-15 22:20:03.786386] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:38.533 [2024-07-15 22:20:03.786422] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:38.533 [2024-07-15 22:20:03.786430] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:38.533 [2024-07-15 22:20:03.786436] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:38.533 [2024-07-15 22:20:03.786442] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
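Note: adq_configure_driver above is the NIC-side half of ADQ. It enables hardware TC offload on the target port, clears the ice channel-pkt-inspect-optimize private flag, turns on kernel busy polling, carves the queues into two traffic classes with mqprio in channel mode, steers NVMe/TCP traffic for the listener (10.0.0.2:4420) into TC 1 with a hardware-only flower filter, and finally pins XPS/receive-queue affinity with the SPDK helper. Collected into one place, with the interface name, address, and queue layout copied from this log (the 2@0 2@2 split presumes the port exposes at least four queues), it looks roughly like:

    # Sketch of the ADQ NIC setup traced above; the device lives in the target namespace.
    NS="ip netns exec cvl_0_0_ns_spdk"
    DEV=cvl_0_0
    $NS ethtool --offload $DEV hw-tc-offload on
    $NS ethtool --set-priv-flags $DEV channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # Two traffic classes: TC0 = 2 queues at offset 0, TC1 = 2 queues at offset 2,
    # offloaded to hardware in channel mode.
    $NS tc qdisc add dev $DEV root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    $NS tc qdisc add dev $DEV ingress
    # Steer NVMe/TCP connections to the listener into TC1, hardware only (skip_sw).
    $NS tc filter add dev $DEV protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
    # Align transmit/receive queue affinity (SPDK helper shown in the trace).
    $NS ./scripts/perf/nvmf/set_xps_rxqs $DEV

With this in place, the target-side sock options (--enable-placement-id 1, --sock-priority 1) applied below should let SPDK keep each connection's qpair on the poll group that owns its hardware queue, which is what the second nvmf_get_stats check further down verifies by counting how many poll groups end up with no IO qpairs at all.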
00:22:38.533 [2024-07-15 22:20:03.786577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:38.533 [2024-07-15 22:20:03.786695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:38.533 [2024-07-15 22:20:03.786852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:38.533 [2024-07-15 22:20:03.786853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:39.104 22:20:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:39.104 22:20:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:22:39.104 22:20:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:39.104 22:20:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:39.104 22:20:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.364 22:20:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:39.364 22:20:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:22:39.364 22:20:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:39.364 22:20:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:39.364 22:20:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.364 22:20:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.364 22:20:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.364 22:20:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:39.364 22:20:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:39.364 22:20:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.364 22:20:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.364 22:20:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.364 22:20:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:39.364 22:20:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.364 22:20:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.364 22:20:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.364 22:20:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:39.364 22:20:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.364 22:20:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.364 [2024-07-15 22:20:04.587494] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:39.364 22:20:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.364 22:20:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:39.364 22:20:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.364 22:20:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.364 Malloc1 00:22:39.364 22:20:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.364 22:20:04 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:39.364 22:20:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.364 22:20:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.364 22:20:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.364 22:20:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:39.364 22:20:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.364 22:20:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.364 22:20:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.364 22:20:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:39.364 22:20:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.364 22:20:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.364 [2024-07-15 22:20:04.646888] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:39.364 22:20:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.364 22:20:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=2839576 00:22:39.365 22:20:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:22:39.365 22:20:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:39.365 EAL: No free 2048 kB hugepages reported on node 1 00:22:41.951 22:20:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:22:41.951 22:20:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.951 22:20:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:41.951 22:20:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.951 22:20:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:22:41.951 "tick_rate": 2400000000, 00:22:41.951 "poll_groups": [ 00:22:41.951 { 00:22:41.951 "name": "nvmf_tgt_poll_group_000", 00:22:41.951 "admin_qpairs": 1, 00:22:41.951 "io_qpairs": 2, 00:22:41.951 "current_admin_qpairs": 1, 00:22:41.951 "current_io_qpairs": 2, 00:22:41.951 "pending_bdev_io": 0, 00:22:41.951 "completed_nvme_io": 38737, 00:22:41.951 "transports": [ 00:22:41.951 { 00:22:41.951 "trtype": "TCP" 00:22:41.951 } 00:22:41.951 ] 00:22:41.951 }, 00:22:41.951 { 00:22:41.951 "name": "nvmf_tgt_poll_group_001", 00:22:41.951 "admin_qpairs": 0, 00:22:41.951 "io_qpairs": 2, 00:22:41.951 "current_admin_qpairs": 0, 00:22:41.951 "current_io_qpairs": 2, 00:22:41.951 "pending_bdev_io": 0, 00:22:41.951 "completed_nvme_io": 40642, 00:22:41.951 "transports": [ 00:22:41.951 { 00:22:41.951 "trtype": "TCP" 00:22:41.951 } 00:22:41.951 ] 00:22:41.951 }, 00:22:41.951 { 00:22:41.951 "name": "nvmf_tgt_poll_group_002", 00:22:41.951 "admin_qpairs": 0, 00:22:41.951 "io_qpairs": 0, 00:22:41.951 "current_admin_qpairs": 0, 00:22:41.951 "current_io_qpairs": 0, 00:22:41.951 "pending_bdev_io": 0, 00:22:41.951 "completed_nvme_io": 0, 
00:22:41.951 "transports": [ 00:22:41.951 { 00:22:41.951 "trtype": "TCP" 00:22:41.951 } 00:22:41.951 ] 00:22:41.951 }, 00:22:41.951 { 00:22:41.951 "name": "nvmf_tgt_poll_group_003", 00:22:41.951 "admin_qpairs": 0, 00:22:41.951 "io_qpairs": 0, 00:22:41.951 "current_admin_qpairs": 0, 00:22:41.951 "current_io_qpairs": 0, 00:22:41.951 "pending_bdev_io": 0, 00:22:41.951 "completed_nvme_io": 0, 00:22:41.951 "transports": [ 00:22:41.951 { 00:22:41.951 "trtype": "TCP" 00:22:41.951 } 00:22:41.951 ] 00:22:41.951 } 00:22:41.951 ] 00:22:41.951 }' 00:22:41.951 22:20:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:41.951 22:20:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:22:41.951 22:20:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:22:41.951 22:20:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:22:41.951 22:20:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 2839576 00:22:50.081 Initializing NVMe Controllers 00:22:50.081 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:50.081 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:50.081 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:50.081 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:50.081 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:50.081 Initialization complete. Launching workers. 00:22:50.081 ======================================================== 00:22:50.081 Latency(us) 00:22:50.081 Device Information : IOPS MiB/s Average min max 00:22:50.081 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10083.00 39.39 6348.95 1292.43 50141.58 00:22:50.081 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10574.60 41.31 6053.42 1344.32 50427.86 00:22:50.081 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10366.40 40.49 6174.96 1230.43 49899.74 00:22:50.081 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9650.40 37.70 6632.33 1292.91 50272.91 00:22:50.081 ======================================================== 00:22:50.081 Total : 40674.40 158.88 6295.01 1230.43 50427.86 00:22:50.081 00:22:50.081 22:20:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:22:50.081 22:20:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:50.081 22:20:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:50.081 22:20:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:50.081 22:20:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:50.081 22:20:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:50.081 22:20:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:50.081 rmmod nvme_tcp 00:22:50.081 rmmod nvme_fabrics 00:22:50.081 rmmod nvme_keyring 00:22:50.081 22:20:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:50.081 22:20:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:50.081 22:20:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:50.081 22:20:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2839244 ']' 00:22:50.081 22:20:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
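Editor's note: while spdk_nvme_perf runs, the script pulls nvmf_get_stats and counts poll groups that carry no I/O qpairs; per the trace, a count below 2 is treated as an ADQ steering failure. A hedged equivalent of that check, assuming the poll_groups/current_io_qpairs layout shown in the JSON above:

# Count poll groups with no active I/O qpairs; fewer than 2 idle groups means
# the connections were not steered the way the ADQ run expects.
idle_groups=$(./scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | .name' \
        | wc -l)
if [ "$idle_groups" -lt 2 ]; then
    echo "ADQ check failed: only $idle_groups idle poll groups" >&2
    exit 1
fi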
killprocess 2839244 00:22:50.081 22:20:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 2839244 ']' 00:22:50.081 22:20:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 2839244 00:22:50.081 22:20:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:22:50.081 22:20:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:50.081 22:20:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2839244 00:22:50.081 22:20:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:50.081 22:20:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:50.081 22:20:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2839244' 00:22:50.081 killing process with pid 2839244 00:22:50.081 22:20:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 2839244 00:22:50.081 22:20:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 2839244 00:22:50.081 22:20:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:50.081 22:20:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:50.081 22:20:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:50.081 22:20:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:50.081 22:20:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:50.081 22:20:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.081 22:20:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:50.081 22:20:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.383 22:20:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:53.383 22:20:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:22:53.383 00:22:53.383 real 0m52.874s 00:22:53.383 user 2m44.559s 00:22:53.383 sys 0m12.482s 00:22:53.383 22:20:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:53.383 22:20:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:53.383 ************************************ 00:22:53.383 END TEST nvmf_perf_adq 00:22:53.383 ************************************ 00:22:53.383 22:20:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:53.383 22:20:18 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:53.383 22:20:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:53.383 22:20:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:53.383 22:20:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:53.383 ************************************ 00:22:53.383 START TEST nvmf_shutdown 00:22:53.383 ************************************ 00:22:53.383 22:20:18 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:53.383 * Looking for test storage... 
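Editor's note: the teardown a few lines up goes through the harness's killprocess helper, which checks that the pid is still alive and is really the reactor it started before killing and reaping it. A simplified, hypothetical reimplementation of that pattern (killprocess_sketch is not the real autotest_common.sh function):

# Verify the pid is alive and is not a sudo wrapper, then kill and reap it.
killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = "sudo" ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true   # wait only reaps children of this shell
}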
00:22:53.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:53.383 22:20:18 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:53.383 22:20:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:53.383 22:20:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:53.383 22:20:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:53.383 22:20:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:53.383 22:20:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:53.383 22:20:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:53.383 22:20:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:53.383 22:20:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:53.383 22:20:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:53.383 22:20:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:53.383 22:20:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:53.383 22:20:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:53.383 22:20:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:53.383 22:20:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:53.383 22:20:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:53.383 22:20:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:53.383 22:20:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:53.383 22:20:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:53.383 22:20:18 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:53.384 22:20:18 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:53.384 22:20:18 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:53.384 22:20:18 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.384 22:20:18 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.384 22:20:18 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.384 22:20:18 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:53.384 22:20:18 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.384 22:20:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:22:53.384 22:20:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:53.384 22:20:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:53.384 22:20:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:53.384 22:20:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:53.384 22:20:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:53.384 22:20:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:53.384 22:20:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:53.384 22:20:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:53.384 22:20:18 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:53.384 22:20:18 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:53.384 22:20:18 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:53.384 22:20:18 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:53.384 22:20:18 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:53.384 22:20:18 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:53.384 ************************************ 00:22:53.384 START TEST nvmf_shutdown_tc1 00:22:53.384 ************************************ 00:22:53.384 22:20:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:22:53.384 22:20:18 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:22:53.384 22:20:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:53.384 22:20:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:53.384 22:20:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:53.384 22:20:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:53.384 22:20:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:53.384 22:20:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:53.384 22:20:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.384 22:20:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:53.384 22:20:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.384 22:20:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:53.384 22:20:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:53.384 22:20:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:53.384 22:20:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:59.974 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:59.974 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:59.974 22:20:25 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:59.974 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:59.974 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
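Editor's note: the "Found 0000:4b:00.x" and "Found net devices under ..." lines come from nvmf/common.sh walking the PCI bus for supported NICs (Intel E810 0x1592/0x159b, X722 0x37d2, several Mellanox IDs) and mapping each match to its kernel interface via sysfs. A rough standalone sketch of that discovery for the E810 (8086:159b) found in this run:

# List E810 ports and the net interface sysfs associates with each,
# mirroring the pci_devs / pci_net_devs walk in the trace above.
for dev in /sys/bus/pci/devices/*; do
    [ "$(cat "$dev/vendor")" = "0x8086" ] || continue
    [ "$(cat "$dev/device")" = "0x159b" ] || continue
    bdf=$(basename "$dev")
    for net in "$dev"/net/*; do
        [ -e "$net" ] || continue
        echo "Found net device under $bdf: $(basename "$net")"
    done
done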
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:59.974 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:00.237 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:00.237 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:00.237 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:00.237 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:00.237 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:00.237 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:00.237 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:00.237 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:00.237 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms 00:23:00.237 00:23:00.237 --- 10.0.0.2 ping statistics --- 00:23:00.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.237 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:23:00.237 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:00.237 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:00.237 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.412 ms 00:23:00.237 00:23:00.237 --- 10.0.0.1 ping statistics --- 00:23:00.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.237 rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms 00:23:00.237 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:00.237 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:23:00.237 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:00.237 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:00.237 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:00.237 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:00.237 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:00.237 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:00.237 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:00.237 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:00.237 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:00.237 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:00.237 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:00.237 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2845910 00:23:00.237 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2845910 00:23:00.237 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:00.237 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2845910 ']' 00:23:00.237 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.237 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:00.237 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:00.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:00.237 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:00.237 22:20:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:00.499 [2024-07-15 22:20:25.614375] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
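Editor's note: the namespace plumbing above builds the phy test topology: one port (cvl_0_0) is pushed into a network namespace that hosts the target on 10.0.0.2, the second port (cvl_0_1) stays in the root namespace for the initiator on 10.0.0.1, and both directions are verified with ping. A condensed sketch with the same addresses; the interface names are specific to this machine and would differ elsewhere.

# Target port moves into its own namespace; initiator port stays in the root
# namespace; the two physical ports are cabled back-to-back.
TGT_IF=cvl_0_0
INI_IF=cvl_0_1
NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                       # root ns -> target ns
ip netns exec "$NS" ping -c 1 10.0.0.1   # target ns -> root ns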
00:23:00.499 [2024-07-15 22:20:25.614443] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:00.499 EAL: No free 2048 kB hugepages reported on node 1 00:23:00.499 [2024-07-15 22:20:25.692493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:00.499 [2024-07-15 22:20:25.788280] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:00.499 [2024-07-15 22:20:25.788338] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:00.499 [2024-07-15 22:20:25.788346] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:00.499 [2024-07-15 22:20:25.788353] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:00.499 [2024-07-15 22:20:25.788359] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:00.499 [2024-07-15 22:20:25.788538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:00.499 [2024-07-15 22:20:25.788737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:00.499 [2024-07-15 22:20:25.788936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:00.499 [2024-07-15 22:20:25.788937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:01.443 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:01.443 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:23:01.443 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:01.443 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:01.443 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:01.443 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:01.443 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:01.443 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.443 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:01.443 [2024-07-15 22:20:26.447751] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:01.443 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.443 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:01.443 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:01.443 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:01.443 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:01.443 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:01.443 22:20:26 
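Editor's note: both targets and the bdev_svc app in this file are launched in the background and the scripts block on waitforlisten until the RPC socket answers, which is the "Waiting for process to start up and listen on UNIX domain socket ..." message followed by the (( i == 0 )) / return 0 trace. A simplified, hypothetical version of that wait loop (waitforlisten_sketch, not the real helper, which also handles netns and error reporting):

# Poll the RPC socket until the freshly started app answers, giving up after
# ~50 seconds or if the pid disappears.
waitforlisten_sketch() {
    local pid=$1 rpc_sock=${2:-/var/tmp/spdk.sock}
    local i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1
        if ./scripts/rpc.py -t 1 -s "$rpc_sock" rpc_get_methods &>/dev/null; then
            return 0
        fi
        sleep 0.5
    done
    return 1
}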
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.443 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:01.443 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.443 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:01.443 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.443 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:01.443 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.443 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:01.443 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.443 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:01.443 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.443 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:01.443 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.443 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:01.443 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.443 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:01.443 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.443 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:01.443 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.443 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:01.443 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:01.443 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.443 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:01.443 Malloc1 00:23:01.443 [2024-07-15 22:20:26.551247] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:01.443 Malloc2 00:23:01.443 Malloc3 00:23:01.443 Malloc4 00:23:01.443 Malloc5 00:23:01.443 Malloc6 00:23:01.443 Malloc7 00:23:01.705 Malloc8 00:23:01.705 Malloc9 00:23:01.705 Malloc10 00:23:01.705 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.705 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:01.705 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:01.705 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:01.705 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2846143 00:23:01.705 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2846143 
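Editor's note: the ten "for i in num_subsystems / cat" pairs and the Malloc1..Malloc10 lines above come from shutdown.sh queuing one set of RPCs per subsystem into rpcs.txt and playing the file back in a single batch. A rough equivalent, assuming the RPC client consumes one command per line from stdin the way the rpc_cmd call above appears to; the same calls could also be issued individually.

# One Malloc bdev, subsystem, namespace and listener per index, queued into
# rpcs.txt and replayed as a batch against the target's RPC socket.
: > rpcs.txt
for i in $(seq 1 10); do
    {
        echo "bdev_malloc_create 64 512 -b Malloc$i"
        echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i"
        echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
        echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
    } >> rpcs.txt
done
./scripts/rpc.py < rpcs.txt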
/var/tmp/bdevperf.sock 00:23:01.705 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2846143 ']' 00:23:01.705 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:01.705 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:01.705 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:01.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:01.705 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:01.705 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:01.705 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:01.705 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:01.705 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:01.705 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:01.705 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.705 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.705 { 00:23:01.705 "params": { 00:23:01.705 "name": "Nvme$subsystem", 00:23:01.705 "trtype": "$TEST_TRANSPORT", 00:23:01.705 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.705 "adrfam": "ipv4", 00:23:01.705 "trsvcid": "$NVMF_PORT", 00:23:01.705 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.705 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.705 "hdgst": ${hdgst:-false}, 00:23:01.705 "ddgst": ${ddgst:-false} 00:23:01.705 }, 00:23:01.705 "method": "bdev_nvme_attach_controller" 00:23:01.705 } 00:23:01.705 EOF 00:23:01.705 )") 00:23:01.705 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.705 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.705 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.705 { 00:23:01.705 "params": { 00:23:01.705 "name": "Nvme$subsystem", 00:23:01.705 "trtype": "$TEST_TRANSPORT", 00:23:01.705 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.705 "adrfam": "ipv4", 00:23:01.705 "trsvcid": "$NVMF_PORT", 00:23:01.705 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.705 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.705 "hdgst": ${hdgst:-false}, 00:23:01.705 "ddgst": ${ddgst:-false} 00:23:01.705 }, 00:23:01.705 "method": "bdev_nvme_attach_controller" 00:23:01.705 } 00:23:01.705 EOF 00:23:01.705 )") 00:23:01.705 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.705 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.705 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.705 { 00:23:01.705 "params": { 00:23:01.706 
"name": "Nvme$subsystem", 00:23:01.706 "trtype": "$TEST_TRANSPORT", 00:23:01.706 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.706 "adrfam": "ipv4", 00:23:01.706 "trsvcid": "$NVMF_PORT", 00:23:01.706 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.706 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.706 "hdgst": ${hdgst:-false}, 00:23:01.706 "ddgst": ${ddgst:-false} 00:23:01.706 }, 00:23:01.706 "method": "bdev_nvme_attach_controller" 00:23:01.706 } 00:23:01.706 EOF 00:23:01.706 )") 00:23:01.706 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.706 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.706 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.706 { 00:23:01.706 "params": { 00:23:01.706 "name": "Nvme$subsystem", 00:23:01.706 "trtype": "$TEST_TRANSPORT", 00:23:01.706 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.706 "adrfam": "ipv4", 00:23:01.706 "trsvcid": "$NVMF_PORT", 00:23:01.706 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.706 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.706 "hdgst": ${hdgst:-false}, 00:23:01.706 "ddgst": ${ddgst:-false} 00:23:01.706 }, 00:23:01.706 "method": "bdev_nvme_attach_controller" 00:23:01.706 } 00:23:01.706 EOF 00:23:01.706 )") 00:23:01.706 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.706 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.706 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.706 { 00:23:01.706 "params": { 00:23:01.706 "name": "Nvme$subsystem", 00:23:01.706 "trtype": "$TEST_TRANSPORT", 00:23:01.706 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.706 "adrfam": "ipv4", 00:23:01.706 "trsvcid": "$NVMF_PORT", 00:23:01.706 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.706 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.706 "hdgst": ${hdgst:-false}, 00:23:01.706 "ddgst": ${ddgst:-false} 00:23:01.706 }, 00:23:01.706 "method": "bdev_nvme_attach_controller" 00:23:01.706 } 00:23:01.706 EOF 00:23:01.706 )") 00:23:01.706 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.706 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.706 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.706 { 00:23:01.706 "params": { 00:23:01.706 "name": "Nvme$subsystem", 00:23:01.706 "trtype": "$TEST_TRANSPORT", 00:23:01.706 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.706 "adrfam": "ipv4", 00:23:01.706 "trsvcid": "$NVMF_PORT", 00:23:01.706 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.706 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.706 "hdgst": ${hdgst:-false}, 00:23:01.706 "ddgst": ${ddgst:-false} 00:23:01.706 }, 00:23:01.706 "method": "bdev_nvme_attach_controller" 00:23:01.706 } 00:23:01.706 EOF 00:23:01.706 )") 00:23:01.706 22:20:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.706 22:20:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.706 22:20:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.706 { 00:23:01.706 "params": { 00:23:01.706 "name": "Nvme$subsystem", 
00:23:01.706 "trtype": "$TEST_TRANSPORT", 00:23:01.706 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.706 "adrfam": "ipv4", 00:23:01.706 "trsvcid": "$NVMF_PORT", 00:23:01.706 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.706 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.706 "hdgst": ${hdgst:-false}, 00:23:01.706 "ddgst": ${ddgst:-false} 00:23:01.706 }, 00:23:01.706 "method": "bdev_nvme_attach_controller" 00:23:01.706 } 00:23:01.706 EOF 00:23:01.706 )") 00:23:01.706 22:20:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.706 [2024-07-15 22:20:27.008002] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:23:01.706 [2024-07-15 22:20:27.008071] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:01.706 22:20:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.706 22:20:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.706 { 00:23:01.706 "params": { 00:23:01.706 "name": "Nvme$subsystem", 00:23:01.706 "trtype": "$TEST_TRANSPORT", 00:23:01.706 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.706 "adrfam": "ipv4", 00:23:01.706 "trsvcid": "$NVMF_PORT", 00:23:01.706 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.706 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.706 "hdgst": ${hdgst:-false}, 00:23:01.706 "ddgst": ${ddgst:-false} 00:23:01.706 }, 00:23:01.706 "method": "bdev_nvme_attach_controller" 00:23:01.706 } 00:23:01.706 EOF 00:23:01.706 )") 00:23:01.706 22:20:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.706 22:20:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.706 22:20:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.706 { 00:23:01.706 "params": { 00:23:01.706 "name": "Nvme$subsystem", 00:23:01.706 "trtype": "$TEST_TRANSPORT", 00:23:01.706 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.706 "adrfam": "ipv4", 00:23:01.706 "trsvcid": "$NVMF_PORT", 00:23:01.706 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.706 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.706 "hdgst": ${hdgst:-false}, 00:23:01.706 "ddgst": ${ddgst:-false} 00:23:01.706 }, 00:23:01.706 "method": "bdev_nvme_attach_controller" 00:23:01.706 } 00:23:01.706 EOF 00:23:01.706 )") 00:23:01.706 22:20:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.706 22:20:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.706 22:20:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.706 { 00:23:01.706 "params": { 00:23:01.706 "name": "Nvme$subsystem", 00:23:01.706 "trtype": "$TEST_TRANSPORT", 00:23:01.706 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.706 "adrfam": "ipv4", 00:23:01.706 "trsvcid": "$NVMF_PORT", 00:23:01.706 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.706 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.706 "hdgst": ${hdgst:-false}, 00:23:01.706 "ddgst": ${ddgst:-false} 00:23:01.706 }, 00:23:01.706 "method": "bdev_nvme_attach_controller" 00:23:01.706 } 00:23:01.706 EOF 00:23:01.706 )") 00:23:01.706 22:20:27 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.968 22:20:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:23:01.968 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.968 22:20:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:01.968 22:20:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:01.968 "params": { 00:23:01.968 "name": "Nvme1", 00:23:01.968 "trtype": "tcp", 00:23:01.968 "traddr": "10.0.0.2", 00:23:01.968 "adrfam": "ipv4", 00:23:01.968 "trsvcid": "4420", 00:23:01.968 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:01.968 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:01.968 "hdgst": false, 00:23:01.968 "ddgst": false 00:23:01.968 }, 00:23:01.968 "method": "bdev_nvme_attach_controller" 00:23:01.968 },{ 00:23:01.968 "params": { 00:23:01.968 "name": "Nvme2", 00:23:01.968 "trtype": "tcp", 00:23:01.968 "traddr": "10.0.0.2", 00:23:01.968 "adrfam": "ipv4", 00:23:01.968 "trsvcid": "4420", 00:23:01.968 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:01.968 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:01.968 "hdgst": false, 00:23:01.968 "ddgst": false 00:23:01.968 }, 00:23:01.968 "method": "bdev_nvme_attach_controller" 00:23:01.968 },{ 00:23:01.968 "params": { 00:23:01.968 "name": "Nvme3", 00:23:01.968 "trtype": "tcp", 00:23:01.968 "traddr": "10.0.0.2", 00:23:01.968 "adrfam": "ipv4", 00:23:01.968 "trsvcid": "4420", 00:23:01.968 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:01.968 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:01.968 "hdgst": false, 00:23:01.968 "ddgst": false 00:23:01.968 }, 00:23:01.968 "method": "bdev_nvme_attach_controller" 00:23:01.968 },{ 00:23:01.968 "params": { 00:23:01.968 "name": "Nvme4", 00:23:01.968 "trtype": "tcp", 00:23:01.968 "traddr": "10.0.0.2", 00:23:01.968 "adrfam": "ipv4", 00:23:01.968 "trsvcid": "4420", 00:23:01.968 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:01.968 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:01.968 "hdgst": false, 00:23:01.968 "ddgst": false 00:23:01.968 }, 00:23:01.968 "method": "bdev_nvme_attach_controller" 00:23:01.968 },{ 00:23:01.968 "params": { 00:23:01.968 "name": "Nvme5", 00:23:01.968 "trtype": "tcp", 00:23:01.968 "traddr": "10.0.0.2", 00:23:01.968 "adrfam": "ipv4", 00:23:01.968 "trsvcid": "4420", 00:23:01.968 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:01.968 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:01.968 "hdgst": false, 00:23:01.968 "ddgst": false 00:23:01.968 }, 00:23:01.968 "method": "bdev_nvme_attach_controller" 00:23:01.968 },{ 00:23:01.968 "params": { 00:23:01.968 "name": "Nvme6", 00:23:01.968 "trtype": "tcp", 00:23:01.968 "traddr": "10.0.0.2", 00:23:01.968 "adrfam": "ipv4", 00:23:01.968 "trsvcid": "4420", 00:23:01.968 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:01.968 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:01.968 "hdgst": false, 00:23:01.968 "ddgst": false 00:23:01.968 }, 00:23:01.968 "method": "bdev_nvme_attach_controller" 00:23:01.968 },{ 00:23:01.968 "params": { 00:23:01.968 "name": "Nvme7", 00:23:01.968 "trtype": "tcp", 00:23:01.968 "traddr": "10.0.0.2", 00:23:01.968 "adrfam": "ipv4", 00:23:01.968 "trsvcid": "4420", 00:23:01.968 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:01.968 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:01.968 "hdgst": false, 00:23:01.968 "ddgst": false 00:23:01.968 }, 00:23:01.968 "method": "bdev_nvme_attach_controller" 00:23:01.968 },{ 00:23:01.968 "params": { 00:23:01.968 "name": "Nvme8", 00:23:01.968 "trtype": "tcp", 00:23:01.968 
"traddr": "10.0.0.2", 00:23:01.968 "adrfam": "ipv4", 00:23:01.968 "trsvcid": "4420", 00:23:01.968 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:01.968 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:01.968 "hdgst": false, 00:23:01.968 "ddgst": false 00:23:01.968 }, 00:23:01.968 "method": "bdev_nvme_attach_controller" 00:23:01.968 },{ 00:23:01.968 "params": { 00:23:01.968 "name": "Nvme9", 00:23:01.968 "trtype": "tcp", 00:23:01.968 "traddr": "10.0.0.2", 00:23:01.968 "adrfam": "ipv4", 00:23:01.968 "trsvcid": "4420", 00:23:01.968 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:01.968 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:01.968 "hdgst": false, 00:23:01.968 "ddgst": false 00:23:01.968 }, 00:23:01.968 "method": "bdev_nvme_attach_controller" 00:23:01.968 },{ 00:23:01.968 "params": { 00:23:01.968 "name": "Nvme10", 00:23:01.968 "trtype": "tcp", 00:23:01.968 "traddr": "10.0.0.2", 00:23:01.968 "adrfam": "ipv4", 00:23:01.968 "trsvcid": "4420", 00:23:01.968 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:01.968 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:01.968 "hdgst": false, 00:23:01.968 "ddgst": false 00:23:01.968 }, 00:23:01.968 "method": "bdev_nvme_attach_controller" 00:23:01.968 }' 00:23:01.968 [2024-07-15 22:20:27.070029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.968 [2024-07-15 22:20:27.135416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.354 22:20:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:03.354 22:20:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:23:03.354 22:20:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:03.354 22:20:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.354 22:20:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:03.354 22:20:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.354 22:20:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2846143 00:23:03.354 22:20:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:23:03.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2846143 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:03.354 22:20:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:23:04.296 22:20:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2845910 00:23:04.296 22:20:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:04.296 22:20:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:04.296 22:20:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:04.296 22:20:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:04.296 22:20:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:04.296 22:20:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:04.296 { 00:23:04.296 "params": { 00:23:04.296 "name": "Nvme$subsystem", 00:23:04.296 "trtype": "$TEST_TRANSPORT", 00:23:04.296 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:04.296 "adrfam": "ipv4", 00:23:04.296 "trsvcid": "$NVMF_PORT", 00:23:04.296 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:04.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:04.296 "hdgst": ${hdgst:-false}, 00:23:04.296 "ddgst": ${ddgst:-false} 00:23:04.296 }, 00:23:04.296 "method": "bdev_nvme_attach_controller" 00:23:04.296 } 00:23:04.296 EOF 00:23:04.296 )") 00:23:04.296 22:20:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:04.296 22:20:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:04.296 22:20:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:04.296 { 00:23:04.296 "params": { 00:23:04.296 "name": "Nvme$subsystem", 00:23:04.296 "trtype": "$TEST_TRANSPORT", 00:23:04.296 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:04.296 "adrfam": "ipv4", 00:23:04.296 "trsvcid": "$NVMF_PORT", 00:23:04.296 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:04.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:04.296 "hdgst": ${hdgst:-false}, 00:23:04.296 "ddgst": ${ddgst:-false} 00:23:04.296 }, 00:23:04.296 "method": "bdev_nvme_attach_controller" 00:23:04.296 } 00:23:04.296 EOF 00:23:04.296 )") 00:23:04.296 22:20:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:04.296 22:20:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:04.296 22:20:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:04.296 { 00:23:04.296 "params": { 00:23:04.296 "name": "Nvme$subsystem", 00:23:04.296 "trtype": "$TEST_TRANSPORT", 00:23:04.296 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:04.296 "adrfam": "ipv4", 00:23:04.296 "trsvcid": "$NVMF_PORT", 00:23:04.296 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:04.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:04.296 "hdgst": ${hdgst:-false}, 00:23:04.296 "ddgst": ${ddgst:-false} 00:23:04.296 }, 00:23:04.296 "method": "bdev_nvme_attach_controller" 00:23:04.296 } 00:23:04.296 EOF 00:23:04.296 )") 00:23:04.296 22:20:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:04.296 22:20:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:04.296 22:20:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:04.296 { 00:23:04.296 "params": { 00:23:04.296 "name": "Nvme$subsystem", 00:23:04.296 "trtype": "$TEST_TRANSPORT", 00:23:04.296 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:04.296 "adrfam": "ipv4", 00:23:04.296 "trsvcid": "$NVMF_PORT", 00:23:04.296 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:04.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:04.296 "hdgst": ${hdgst:-false}, 00:23:04.296 "ddgst": ${ddgst:-false} 00:23:04.296 }, 00:23:04.296 "method": "bdev_nvme_attach_controller" 00:23:04.296 } 00:23:04.296 EOF 00:23:04.296 )") 00:23:04.296 22:20:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:04.296 22:20:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:04.297 22:20:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # 
config+=("$(cat <<-EOF 00:23:04.297 { 00:23:04.297 "params": { 00:23:04.297 "name": "Nvme$subsystem", 00:23:04.297 "trtype": "$TEST_TRANSPORT", 00:23:04.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:04.297 "adrfam": "ipv4", 00:23:04.297 "trsvcid": "$NVMF_PORT", 00:23:04.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:04.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:04.297 "hdgst": ${hdgst:-false}, 00:23:04.297 "ddgst": ${ddgst:-false} 00:23:04.297 }, 00:23:04.297 "method": "bdev_nvme_attach_controller" 00:23:04.297 } 00:23:04.297 EOF 00:23:04.297 )") 00:23:04.297 22:20:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:04.297 22:20:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:04.297 22:20:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:04.297 { 00:23:04.297 "params": { 00:23:04.297 "name": "Nvme$subsystem", 00:23:04.297 "trtype": "$TEST_TRANSPORT", 00:23:04.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:04.297 "adrfam": "ipv4", 00:23:04.297 "trsvcid": "$NVMF_PORT", 00:23:04.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:04.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:04.297 "hdgst": ${hdgst:-false}, 00:23:04.297 "ddgst": ${ddgst:-false} 00:23:04.297 }, 00:23:04.297 "method": "bdev_nvme_attach_controller" 00:23:04.297 } 00:23:04.297 EOF 00:23:04.297 )") 00:23:04.297 22:20:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:04.297 22:20:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:04.297 22:20:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:04.297 { 00:23:04.297 "params": { 00:23:04.297 "name": "Nvme$subsystem", 00:23:04.297 "trtype": "$TEST_TRANSPORT", 00:23:04.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:04.297 "adrfam": "ipv4", 00:23:04.297 "trsvcid": "$NVMF_PORT", 00:23:04.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:04.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:04.297 "hdgst": ${hdgst:-false}, 00:23:04.297 "ddgst": ${ddgst:-false} 00:23:04.297 }, 00:23:04.297 "method": "bdev_nvme_attach_controller" 00:23:04.297 } 00:23:04.297 EOF 00:23:04.297 )") 00:23:04.297 22:20:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:04.297 [2024-07-15 22:20:29.578274] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
00:23:04.297 [2024-07-15 22:20:29.578329] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2846805 ] 00:23:04.297 22:20:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:04.297 22:20:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:04.297 { 00:23:04.297 "params": { 00:23:04.297 "name": "Nvme$subsystem", 00:23:04.297 "trtype": "$TEST_TRANSPORT", 00:23:04.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:04.297 "adrfam": "ipv4", 00:23:04.297 "trsvcid": "$NVMF_PORT", 00:23:04.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:04.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:04.297 "hdgst": ${hdgst:-false}, 00:23:04.297 "ddgst": ${ddgst:-false} 00:23:04.297 }, 00:23:04.297 "method": "bdev_nvme_attach_controller" 00:23:04.297 } 00:23:04.297 EOF 00:23:04.297 )") 00:23:04.297 22:20:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:04.297 22:20:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:04.297 22:20:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:04.297 { 00:23:04.297 "params": { 00:23:04.297 "name": "Nvme$subsystem", 00:23:04.297 "trtype": "$TEST_TRANSPORT", 00:23:04.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:04.297 "adrfam": "ipv4", 00:23:04.297 "trsvcid": "$NVMF_PORT", 00:23:04.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:04.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:04.297 "hdgst": ${hdgst:-false}, 00:23:04.297 "ddgst": ${ddgst:-false} 00:23:04.297 }, 00:23:04.297 "method": "bdev_nvme_attach_controller" 00:23:04.297 } 00:23:04.297 EOF 00:23:04.297 )") 00:23:04.297 22:20:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:04.297 22:20:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:04.297 22:20:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:04.297 { 00:23:04.297 "params": { 00:23:04.297 "name": "Nvme$subsystem", 00:23:04.297 "trtype": "$TEST_TRANSPORT", 00:23:04.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:04.297 "adrfam": "ipv4", 00:23:04.297 "trsvcid": "$NVMF_PORT", 00:23:04.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:04.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:04.297 "hdgst": ${hdgst:-false}, 00:23:04.297 "ddgst": ${ddgst:-false} 00:23:04.297 }, 00:23:04.297 "method": "bdev_nvme_attach_controller" 00:23:04.297 } 00:23:04.297 EOF 00:23:04.297 )") 00:23:04.297 22:20:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:04.297 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.297 22:20:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:23:04.297 22:20:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:04.297 22:20:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:04.297 "params": { 00:23:04.297 "name": "Nvme1", 00:23:04.297 "trtype": "tcp", 00:23:04.297 "traddr": "10.0.0.2", 00:23:04.297 "adrfam": "ipv4", 00:23:04.297 "trsvcid": "4420", 00:23:04.297 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:04.297 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:04.297 "hdgst": false, 00:23:04.297 "ddgst": false 00:23:04.297 }, 00:23:04.297 "method": "bdev_nvme_attach_controller" 00:23:04.297 },{ 00:23:04.297 "params": { 00:23:04.297 "name": "Nvme2", 00:23:04.297 "trtype": "tcp", 00:23:04.297 "traddr": "10.0.0.2", 00:23:04.297 "adrfam": "ipv4", 00:23:04.297 "trsvcid": "4420", 00:23:04.297 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:04.297 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:04.297 "hdgst": false, 00:23:04.297 "ddgst": false 00:23:04.297 }, 00:23:04.297 "method": "bdev_nvme_attach_controller" 00:23:04.297 },{ 00:23:04.297 "params": { 00:23:04.297 "name": "Nvme3", 00:23:04.297 "trtype": "tcp", 00:23:04.297 "traddr": "10.0.0.2", 00:23:04.297 "adrfam": "ipv4", 00:23:04.297 "trsvcid": "4420", 00:23:04.297 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:04.297 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:04.297 "hdgst": false, 00:23:04.297 "ddgst": false 00:23:04.297 }, 00:23:04.297 "method": "bdev_nvme_attach_controller" 00:23:04.297 },{ 00:23:04.297 "params": { 00:23:04.297 "name": "Nvme4", 00:23:04.297 "trtype": "tcp", 00:23:04.297 "traddr": "10.0.0.2", 00:23:04.297 "adrfam": "ipv4", 00:23:04.297 "trsvcid": "4420", 00:23:04.297 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:04.297 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:04.297 "hdgst": false, 00:23:04.297 "ddgst": false 00:23:04.297 }, 00:23:04.297 "method": "bdev_nvme_attach_controller" 00:23:04.297 },{ 00:23:04.297 "params": { 00:23:04.297 "name": "Nvme5", 00:23:04.297 "trtype": "tcp", 00:23:04.297 "traddr": "10.0.0.2", 00:23:04.297 "adrfam": "ipv4", 00:23:04.297 "trsvcid": "4420", 00:23:04.297 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:04.297 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:04.297 "hdgst": false, 00:23:04.297 "ddgst": false 00:23:04.297 }, 00:23:04.297 "method": "bdev_nvme_attach_controller" 00:23:04.297 },{ 00:23:04.297 "params": { 00:23:04.297 "name": "Nvme6", 00:23:04.297 "trtype": "tcp", 00:23:04.297 "traddr": "10.0.0.2", 00:23:04.297 "adrfam": "ipv4", 00:23:04.297 "trsvcid": "4420", 00:23:04.297 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:04.297 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:04.297 "hdgst": false, 00:23:04.297 "ddgst": false 00:23:04.297 }, 00:23:04.297 "method": "bdev_nvme_attach_controller" 00:23:04.297 },{ 00:23:04.297 "params": { 00:23:04.297 "name": "Nvme7", 00:23:04.297 "trtype": "tcp", 00:23:04.297 "traddr": "10.0.0.2", 00:23:04.297 "adrfam": "ipv4", 00:23:04.297 "trsvcid": "4420", 00:23:04.297 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:04.297 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:04.297 "hdgst": false, 00:23:04.297 "ddgst": false 00:23:04.297 }, 00:23:04.297 "method": "bdev_nvme_attach_controller" 00:23:04.297 },{ 00:23:04.297 "params": { 00:23:04.297 "name": "Nvme8", 00:23:04.297 "trtype": "tcp", 00:23:04.297 "traddr": "10.0.0.2", 00:23:04.297 "adrfam": "ipv4", 00:23:04.297 "trsvcid": "4420", 00:23:04.297 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:04.297 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:04.297 "hdgst": false, 
00:23:04.297 "ddgst": false 00:23:04.297 }, 00:23:04.297 "method": "bdev_nvme_attach_controller" 00:23:04.297 },{ 00:23:04.297 "params": { 00:23:04.297 "name": "Nvme9", 00:23:04.297 "trtype": "tcp", 00:23:04.297 "traddr": "10.0.0.2", 00:23:04.297 "adrfam": "ipv4", 00:23:04.297 "trsvcid": "4420", 00:23:04.297 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:04.297 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:04.297 "hdgst": false, 00:23:04.297 "ddgst": false 00:23:04.297 }, 00:23:04.297 "method": "bdev_nvme_attach_controller" 00:23:04.297 },{ 00:23:04.297 "params": { 00:23:04.297 "name": "Nvme10", 00:23:04.297 "trtype": "tcp", 00:23:04.297 "traddr": "10.0.0.2", 00:23:04.297 "adrfam": "ipv4", 00:23:04.297 "trsvcid": "4420", 00:23:04.298 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:04.298 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:04.298 "hdgst": false, 00:23:04.298 "ddgst": false 00:23:04.298 }, 00:23:04.298 "method": "bdev_nvme_attach_controller" 00:23:04.298 }' 00:23:04.558 [2024-07-15 22:20:29.638847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.558 [2024-07-15 22:20:29.703481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.943 Running I/O for 1 seconds... 00:23:06.915 00:23:06.915 Latency(us) 00:23:06.915 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:06.915 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:06.915 Verification LBA range: start 0x0 length 0x400 00:23:06.915 Nvme1n1 : 1.16 276.79 17.30 0.00 0.00 228847.79 12178.77 239424.85 00:23:06.915 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:06.915 Verification LBA range: start 0x0 length 0x400 00:23:06.915 Nvme2n1 : 1.06 241.58 15.10 0.00 0.00 257406.93 23046.83 241172.48 00:23:06.915 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:06.915 Verification LBA range: start 0x0 length 0x400 00:23:06.915 Nvme3n1 : 1.15 222.88 13.93 0.00 0.00 270914.13 21408.43 228939.09 00:23:06.915 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:06.915 Verification LBA range: start 0x0 length 0x400 00:23:06.915 Nvme4n1 : 1.12 228.00 14.25 0.00 0.00 263496.53 22173.01 265639.25 00:23:06.915 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:06.915 Verification LBA range: start 0x0 length 0x400 00:23:06.915 Nvme5n1 : 1.17 218.72 13.67 0.00 0.00 270819.63 22719.15 270882.13 00:23:06.915 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:06.915 Verification LBA range: start 0x0 length 0x400 00:23:06.915 Nvme6n1 : 1.18 271.82 16.99 0.00 0.00 214091.95 21517.65 230686.72 00:23:06.915 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:06.915 Verification LBA range: start 0x0 length 0x400 00:23:06.915 Nvme7n1 : 1.18 270.76 16.92 0.00 0.00 211231.57 19223.89 211462.83 00:23:06.915 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:06.915 Verification LBA range: start 0x0 length 0x400 00:23:06.915 Nvme8n1 : 1.19 267.94 16.75 0.00 0.00 210106.71 14745.60 248162.99 00:23:06.915 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:06.915 Verification LBA range: start 0x0 length 0x400 00:23:06.915 Nvme9n1 : 1.16 220.46 13.78 0.00 0.00 249777.71 21517.65 248162.99 00:23:06.915 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:06.915 Verification LBA range: start 0x0 length 0x400 
00:23:06.915 Nvme10n1 : 1.17 217.96 13.62 0.00 0.00 248427.52 23046.83 272629.76 00:23:06.915 =================================================================================================================== 00:23:06.915 Total : 2436.93 152.31 0.00 0.00 240108.18 12178.77 272629.76 00:23:07.176 22:20:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:23:07.176 22:20:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:07.176 22:20:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:07.176 22:20:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:07.176 22:20:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:07.176 22:20:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:07.176 22:20:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:23:07.176 22:20:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:07.176 22:20:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:23:07.176 22:20:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:07.176 22:20:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:07.176 rmmod nvme_tcp 00:23:07.176 rmmod nvme_fabrics 00:23:07.176 rmmod nvme_keyring 00:23:07.176 22:20:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:07.176 22:20:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:23:07.176 22:20:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:23:07.176 22:20:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2845910 ']' 00:23:07.176 22:20:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2845910 00:23:07.176 22:20:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 2845910 ']' 00:23:07.176 22:20:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 2845910 00:23:07.176 22:20:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:23:07.176 22:20:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:07.176 22:20:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2845910 00:23:07.176 22:20:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:07.176 22:20:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:07.176 22:20:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2845910' 00:23:07.176 killing process with pid 2845910 00:23:07.176 22:20:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 2845910 00:23:07.176 22:20:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 2845910 00:23:07.437 22:20:32 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:07.437 22:20:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:07.437 22:20:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:07.437 22:20:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:07.437 22:20:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:07.437 22:20:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.437 22:20:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:07.437 22:20:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.976 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:09.976 00:23:09.976 real 0m16.386s 00:23:09.976 user 0m33.646s 00:23:09.976 sys 0m6.467s 00:23:09.976 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:09.976 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:09.976 ************************************ 00:23:09.976 END TEST nvmf_shutdown_tc1 00:23:09.976 ************************************ 00:23:09.976 22:20:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:09.976 22:20:34 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:09.976 22:20:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:09.976 22:20:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:09.976 22:20:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:09.976 ************************************ 00:23:09.976 START TEST nvmf_shutdown_tc2 00:23:09.976 ************************************ 00:23:09.976 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:23:09.976 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:23:09.976 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:09.976 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:09.976 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:09.976 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:09.976 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:09.976 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:09.976 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.976 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:09.977 22:20:34 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:09.977 22:20:34 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:09.977 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:09.977 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:23:09.977 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:09.977 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:09.977 22:20:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:09.977 22:20:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:23:09.977 22:20:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:09.977 22:20:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:09.977 22:20:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:09.977 22:20:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:09.977 22:20:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:09.977 22:20:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:09.977 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:09.977 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.539 ms 00:23:09.977 00:23:09.977 --- 10.0.0.2 ping statistics --- 00:23:09.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.977 rtt min/avg/max/mdev = 0.539/0.539/0.539/0.000 ms 00:23:09.977 22:20:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:09.977 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:09.977 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.399 ms 00:23:09.977 00:23:09.977 --- 10.0.0.1 ping statistics --- 00:23:09.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.977 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:23:09.977 22:20:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:09.977 22:20:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:23:09.977 22:20:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:09.977 22:20:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:09.977 22:20:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:09.977 22:20:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:09.977 22:20:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:09.977 22:20:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:09.977 22:20:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:09.977 22:20:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:09.977 22:20:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:09.978 22:20:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:09.978 22:20:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:09.978 22:20:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2847923 00:23:09.978 22:20:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2847923 00:23:09.978 22:20:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x1E 00:23:09.978 22:20:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2847923 ']' 00:23:09.978 22:20:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.978 22:20:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:09.978 22:20:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:09.978 22:20:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:09.978 22:20:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:09.978 [2024-07-15 22:20:35.299152] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:23:09.978 [2024-07-15 22:20:35.299238] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:10.238 EAL: No free 2048 kB hugepages reported on node 1 00:23:10.238 [2024-07-15 22:20:35.386671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:10.238 [2024-07-15 22:20:35.448305] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:10.238 [2024-07-15 22:20:35.448337] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:10.238 [2024-07-15 22:20:35.448343] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:10.238 [2024-07-15 22:20:35.448347] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:10.238 [2024-07-15 22:20:35.448351] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:10.238 [2024-07-15 22:20:35.448462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:10.238 [2024-07-15 22:20:35.448620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:10.238 [2024-07-15 22:20:35.448776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:10.238 [2024-07-15 22:20:35.448778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:10.807 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:10.807 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:23:10.807 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:10.807 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:10.807 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:10.807 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:10.807 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:10.807 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.807 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:10.807 [2024-07-15 22:20:36.126394] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:11.066 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.066 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:11.066 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:11.066 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:11.066 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:11.066 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:11.066 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:11.066 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:11.066 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:11.066 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:11.066 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:11.066 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:11.066 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:11.066 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:11.066 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:11.066 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:11.066 22:20:36 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:11.066 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:11.066 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:11.066 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:11.066 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:11.066 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:11.066 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:11.066 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:11.066 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:11.066 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:11.066 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:11.066 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.066 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:11.066 Malloc1 00:23:11.066 [2024-07-15 22:20:36.225141] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:11.066 Malloc2 00:23:11.066 Malloc3 00:23:11.066 Malloc4 00:23:11.066 Malloc5 00:23:11.327 Malloc6 00:23:11.327 Malloc7 00:23:11.327 Malloc8 00:23:11.327 Malloc9 00:23:11.327 Malloc10 00:23:11.327 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.327 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:11.327 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:11.327 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:11.327 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2848301 00:23:11.327 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2848301 /var/tmp/bdevperf.sock 00:23:11.327 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2848301 ']' 00:23:11.327 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:11.327 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:11.327 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:11.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:11.327 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:11.327 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:11.327 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:11.327 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:11.327 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:23:11.327 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:23:11.327 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.327 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.327 { 00:23:11.327 "params": { 00:23:11.327 "name": "Nvme$subsystem", 00:23:11.327 "trtype": "$TEST_TRANSPORT", 00:23:11.327 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.327 "adrfam": "ipv4", 00:23:11.327 "trsvcid": "$NVMF_PORT", 00:23:11.327 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.327 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.327 "hdgst": ${hdgst:-false}, 00:23:11.327 "ddgst": ${ddgst:-false} 00:23:11.327 }, 00:23:11.327 "method": "bdev_nvme_attach_controller" 00:23:11.327 } 00:23:11.327 EOF 00:23:11.327 )") 00:23:11.327 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:11.327 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.327 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.327 { 00:23:11.327 "params": { 00:23:11.327 "name": "Nvme$subsystem", 00:23:11.327 "trtype": "$TEST_TRANSPORT", 00:23:11.327 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.327 "adrfam": "ipv4", 00:23:11.327 "trsvcid": "$NVMF_PORT", 00:23:11.327 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.327 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.327 "hdgst": ${hdgst:-false}, 00:23:11.327 "ddgst": ${ddgst:-false} 00:23:11.327 }, 00:23:11.327 "method": "bdev_nvme_attach_controller" 00:23:11.327 } 00:23:11.327 EOF 00:23:11.327 )") 00:23:11.327 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:11.327 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.327 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.327 { 00:23:11.327 "params": { 00:23:11.327 "name": "Nvme$subsystem", 00:23:11.327 "trtype": "$TEST_TRANSPORT", 00:23:11.327 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.327 "adrfam": "ipv4", 00:23:11.327 "trsvcid": "$NVMF_PORT", 00:23:11.327 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.327 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.327 "hdgst": ${hdgst:-false}, 00:23:11.327 "ddgst": ${ddgst:-false} 00:23:11.327 }, 00:23:11.327 "method": "bdev_nvme_attach_controller" 00:23:11.327 } 00:23:11.327 EOF 00:23:11.327 )") 00:23:11.327 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:11.328 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:23:11.328 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.328 { 00:23:11.328 "params": { 00:23:11.328 "name": "Nvme$subsystem", 00:23:11.328 "trtype": "$TEST_TRANSPORT", 00:23:11.328 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.328 "adrfam": "ipv4", 00:23:11.328 "trsvcid": "$NVMF_PORT", 00:23:11.328 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.328 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.328 "hdgst": ${hdgst:-false}, 00:23:11.328 "ddgst": ${ddgst:-false} 00:23:11.328 }, 00:23:11.328 "method": "bdev_nvme_attach_controller" 00:23:11.328 } 00:23:11.328 EOF 00:23:11.328 )") 00:23:11.328 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:11.589 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.589 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.589 { 00:23:11.589 "params": { 00:23:11.589 "name": "Nvme$subsystem", 00:23:11.589 "trtype": "$TEST_TRANSPORT", 00:23:11.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.589 "adrfam": "ipv4", 00:23:11.589 "trsvcid": "$NVMF_PORT", 00:23:11.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.589 "hdgst": ${hdgst:-false}, 00:23:11.589 "ddgst": ${ddgst:-false} 00:23:11.589 }, 00:23:11.589 "method": "bdev_nvme_attach_controller" 00:23:11.589 } 00:23:11.589 EOF 00:23:11.589 )") 00:23:11.589 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:11.589 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.589 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.589 { 00:23:11.589 "params": { 00:23:11.589 "name": "Nvme$subsystem", 00:23:11.589 "trtype": "$TEST_TRANSPORT", 00:23:11.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.589 "adrfam": "ipv4", 00:23:11.589 "trsvcid": "$NVMF_PORT", 00:23:11.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.589 "hdgst": ${hdgst:-false}, 00:23:11.589 "ddgst": ${ddgst:-false} 00:23:11.589 }, 00:23:11.589 "method": "bdev_nvme_attach_controller" 00:23:11.589 } 00:23:11.589 EOF 00:23:11.589 )") 00:23:11.589 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:11.589 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.589 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.589 { 00:23:11.589 "params": { 00:23:11.589 "name": "Nvme$subsystem", 00:23:11.589 "trtype": "$TEST_TRANSPORT", 00:23:11.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.589 "adrfam": "ipv4", 00:23:11.589 "trsvcid": "$NVMF_PORT", 00:23:11.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.589 "hdgst": ${hdgst:-false}, 00:23:11.589 "ddgst": ${ddgst:-false} 00:23:11.589 }, 00:23:11.589 "method": "bdev_nvme_attach_controller" 00:23:11.589 } 00:23:11.589 EOF 00:23:11.589 )") 00:23:11.589 [2024-07-15 22:20:36.670190] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
00:23:11.589 [2024-07-15 22:20:36.670248] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2848301 ] 00:23:11.589 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:11.589 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.589 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.589 { 00:23:11.589 "params": { 00:23:11.589 "name": "Nvme$subsystem", 00:23:11.589 "trtype": "$TEST_TRANSPORT", 00:23:11.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.589 "adrfam": "ipv4", 00:23:11.589 "trsvcid": "$NVMF_PORT", 00:23:11.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.589 "hdgst": ${hdgst:-false}, 00:23:11.589 "ddgst": ${ddgst:-false} 00:23:11.589 }, 00:23:11.589 "method": "bdev_nvme_attach_controller" 00:23:11.589 } 00:23:11.589 EOF 00:23:11.589 )") 00:23:11.589 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:11.589 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.589 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.589 { 00:23:11.589 "params": { 00:23:11.589 "name": "Nvme$subsystem", 00:23:11.589 "trtype": "$TEST_TRANSPORT", 00:23:11.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.589 "adrfam": "ipv4", 00:23:11.589 "trsvcid": "$NVMF_PORT", 00:23:11.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.589 "hdgst": ${hdgst:-false}, 00:23:11.589 "ddgst": ${ddgst:-false} 00:23:11.589 }, 00:23:11.590 "method": "bdev_nvme_attach_controller" 00:23:11.590 } 00:23:11.590 EOF 00:23:11.590 )") 00:23:11.590 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:11.590 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.590 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.590 { 00:23:11.590 "params": { 00:23:11.590 "name": "Nvme$subsystem", 00:23:11.590 "trtype": "$TEST_TRANSPORT", 00:23:11.590 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.590 "adrfam": "ipv4", 00:23:11.590 "trsvcid": "$NVMF_PORT", 00:23:11.590 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.590 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.590 "hdgst": ${hdgst:-false}, 00:23:11.590 "ddgst": ${ddgst:-false} 00:23:11.590 }, 00:23:11.590 "method": "bdev_nvme_attach_controller" 00:23:11.590 } 00:23:11.590 EOF 00:23:11.590 )") 00:23:11.590 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:11.590 EAL: No free 2048 kB hugepages reported on node 1 00:23:11.590 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:23:11.590 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:23:11.590 22:20:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:11.590 "params": { 00:23:11.590 "name": "Nvme1", 00:23:11.590 "trtype": "tcp", 00:23:11.590 "traddr": "10.0.0.2", 00:23:11.590 "adrfam": "ipv4", 00:23:11.590 "trsvcid": "4420", 00:23:11.590 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.590 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:11.590 "hdgst": false, 00:23:11.590 "ddgst": false 00:23:11.590 }, 00:23:11.590 "method": "bdev_nvme_attach_controller" 00:23:11.590 },{ 00:23:11.590 "params": { 00:23:11.590 "name": "Nvme2", 00:23:11.590 "trtype": "tcp", 00:23:11.590 "traddr": "10.0.0.2", 00:23:11.590 "adrfam": "ipv4", 00:23:11.590 "trsvcid": "4420", 00:23:11.590 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:11.590 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:11.590 "hdgst": false, 00:23:11.590 "ddgst": false 00:23:11.590 }, 00:23:11.590 "method": "bdev_nvme_attach_controller" 00:23:11.590 },{ 00:23:11.590 "params": { 00:23:11.590 "name": "Nvme3", 00:23:11.590 "trtype": "tcp", 00:23:11.590 "traddr": "10.0.0.2", 00:23:11.590 "adrfam": "ipv4", 00:23:11.590 "trsvcid": "4420", 00:23:11.590 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:11.590 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:11.590 "hdgst": false, 00:23:11.590 "ddgst": false 00:23:11.590 }, 00:23:11.590 "method": "bdev_nvme_attach_controller" 00:23:11.590 },{ 00:23:11.590 "params": { 00:23:11.590 "name": "Nvme4", 00:23:11.590 "trtype": "tcp", 00:23:11.590 "traddr": "10.0.0.2", 00:23:11.590 "adrfam": "ipv4", 00:23:11.590 "trsvcid": "4420", 00:23:11.590 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:11.590 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:11.590 "hdgst": false, 00:23:11.590 "ddgst": false 00:23:11.590 }, 00:23:11.590 "method": "bdev_nvme_attach_controller" 00:23:11.590 },{ 00:23:11.590 "params": { 00:23:11.590 "name": "Nvme5", 00:23:11.590 "trtype": "tcp", 00:23:11.590 "traddr": "10.0.0.2", 00:23:11.590 "adrfam": "ipv4", 00:23:11.590 "trsvcid": "4420", 00:23:11.590 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:11.590 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:11.590 "hdgst": false, 00:23:11.590 "ddgst": false 00:23:11.590 }, 00:23:11.590 "method": "bdev_nvme_attach_controller" 00:23:11.590 },{ 00:23:11.590 "params": { 00:23:11.590 "name": "Nvme6", 00:23:11.590 "trtype": "tcp", 00:23:11.590 "traddr": "10.0.0.2", 00:23:11.590 "adrfam": "ipv4", 00:23:11.590 "trsvcid": "4420", 00:23:11.590 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:11.590 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:11.590 "hdgst": false, 00:23:11.590 "ddgst": false 00:23:11.590 }, 00:23:11.590 "method": "bdev_nvme_attach_controller" 00:23:11.590 },{ 00:23:11.590 "params": { 00:23:11.590 "name": "Nvme7", 00:23:11.590 "trtype": "tcp", 00:23:11.590 "traddr": "10.0.0.2", 00:23:11.590 "adrfam": "ipv4", 00:23:11.590 "trsvcid": "4420", 00:23:11.590 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:11.590 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:11.590 "hdgst": false, 00:23:11.590 "ddgst": false 00:23:11.590 }, 00:23:11.590 "method": "bdev_nvme_attach_controller" 00:23:11.590 },{ 00:23:11.590 "params": { 00:23:11.590 "name": "Nvme8", 00:23:11.590 "trtype": "tcp", 00:23:11.590 "traddr": "10.0.0.2", 00:23:11.590 "adrfam": "ipv4", 00:23:11.590 "trsvcid": "4420", 00:23:11.590 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:11.590 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:11.590 "hdgst": false, 
00:23:11.590 "ddgst": false 00:23:11.590 }, 00:23:11.590 "method": "bdev_nvme_attach_controller" 00:23:11.590 },{ 00:23:11.590 "params": { 00:23:11.590 "name": "Nvme9", 00:23:11.590 "trtype": "tcp", 00:23:11.590 "traddr": "10.0.0.2", 00:23:11.590 "adrfam": "ipv4", 00:23:11.590 "trsvcid": "4420", 00:23:11.590 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:11.590 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:11.590 "hdgst": false, 00:23:11.590 "ddgst": false 00:23:11.590 }, 00:23:11.590 "method": "bdev_nvme_attach_controller" 00:23:11.590 },{ 00:23:11.590 "params": { 00:23:11.590 "name": "Nvme10", 00:23:11.590 "trtype": "tcp", 00:23:11.590 "traddr": "10.0.0.2", 00:23:11.590 "adrfam": "ipv4", 00:23:11.590 "trsvcid": "4420", 00:23:11.590 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:11.590 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:11.590 "hdgst": false, 00:23:11.590 "ddgst": false 00:23:11.590 }, 00:23:11.590 "method": "bdev_nvme_attach_controller" 00:23:11.590 }' 00:23:11.590 [2024-07-15 22:20:36.729812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.590 [2024-07-15 22:20:36.794640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:12.974 Running I/O for 10 seconds... 00:23:12.974 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:12.975 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:23:12.975 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:12.975 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.975 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:12.975 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.975 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:12.975 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:12.975 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:13.238 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:23:13.238 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:23:13.238 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:13.238 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:13.238 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:13.238 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:13.238 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.238 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:13.238 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.238 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:13.238 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 
3 -ge 100 ']' 00:23:13.238 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:13.498 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:13.498 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:13.498 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:13.498 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:13.498 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.498 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:13.498 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.498 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:23:13.498 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:13.498 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:13.759 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:13.759 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:13.759 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:13.759 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:13.759 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.760 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:13.760 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.760 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:13.760 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:13.760 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:23:13.760 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:23:13.760 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:23:13.760 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2848301 00:23:13.760 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2848301 ']' 00:23:13.760 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2848301 00:23:13.760 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:23:13.760 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:13.760 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2848301 00:23:13.760 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:13.760 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- 
# '[' reactor_0 = sudo ']' 00:23:13.760 22:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2848301' 00:23:13.760 killing process with pid 2848301 00:23:13.760 22:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2848301 00:23:13.760 22:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2848301 00:23:14.021 Received shutdown signal, test time was about 0.975379 seconds 00:23:14.021 00:23:14.021 Latency(us) 00:23:14.021 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.021 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:14.021 Verification LBA range: start 0x0 length 0x400 00:23:14.021 Nvme1n1 : 0.95 203.08 12.69 0.00 0.00 311365.12 23920.64 249910.61 00:23:14.021 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:14.021 Verification LBA range: start 0x0 length 0x400 00:23:14.021 Nvme2n1 : 0.96 200.81 12.55 0.00 0.00 308188.73 42379.95 263891.63 00:23:14.021 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:14.021 Verification LBA range: start 0x0 length 0x400 00:23:14.021 Nvme3n1 : 0.96 266.08 16.63 0.00 0.00 227875.20 23592.96 251658.24 00:23:14.021 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:14.021 Verification LBA range: start 0x0 length 0x400 00:23:14.021 Nvme4n1 : 0.97 260.65 16.29 0.00 0.00 227556.77 21845.33 249910.61 00:23:14.021 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:14.021 Verification LBA range: start 0x0 length 0x400 00:23:14.021 Nvme5n1 : 0.97 268.32 16.77 0.00 0.00 206526.94 5406.72 241172.48 00:23:14.021 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:14.021 Verification LBA range: start 0x0 length 0x400 00:23:14.021 Nvme6n1 : 0.93 205.48 12.84 0.00 0.00 275218.77 23265.28 251658.24 00:23:14.021 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:14.021 Verification LBA range: start 0x0 length 0x400 00:23:14.021 Nvme7n1 : 0.95 202.22 12.64 0.00 0.00 274097.49 63788.37 218453.33 00:23:14.021 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:14.021 Verification LBA range: start 0x0 length 0x400 00:23:14.021 Nvme8n1 : 0.96 200.05 12.50 0.00 0.00 271078.12 24576.00 283115.52 00:23:14.021 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:14.021 Verification LBA range: start 0x0 length 0x400 00:23:14.021 Nvme9n1 : 0.94 271.46 16.97 0.00 0.00 194195.41 20534.61 249910.61 00:23:14.021 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:14.021 Verification LBA range: start 0x0 length 0x400 00:23:14.021 Nvme10n1 : 0.96 267.43 16.71 0.00 0.00 192601.17 22173.01 246415.36 00:23:14.021 =================================================================================================================== 00:23:14.021 Total : 2345.58 146.60 0.00 0.00 243230.36 5406.72 283115.52 00:23:14.021 22:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:23:14.963 22:20:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2847923 00:23:14.963 22:20:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:23:14.963 22:20:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f 
./local-job0-0-verify.state 00:23:14.963 22:20:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:14.963 22:20:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:14.963 22:20:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:14.963 22:20:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:14.963 22:20:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:23:14.963 22:20:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:14.963 22:20:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:23:14.963 22:20:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:14.963 22:20:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:14.963 rmmod nvme_tcp 00:23:14.963 rmmod nvme_fabrics 00:23:15.223 rmmod nvme_keyring 00:23:15.223 22:20:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:15.223 22:20:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:23:15.223 22:20:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:23:15.223 22:20:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2847923 ']' 00:23:15.223 22:20:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2847923 00:23:15.224 22:20:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2847923 ']' 00:23:15.224 22:20:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2847923 00:23:15.224 22:20:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:23:15.224 22:20:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:15.224 22:20:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2847923 00:23:15.224 22:20:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:15.224 22:20:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:15.224 22:20:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2847923' 00:23:15.224 killing process with pid 2847923 00:23:15.224 22:20:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2847923 00:23:15.224 22:20:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2847923 00:23:15.484 22:20:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:15.484 22:20:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:15.484 22:20:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:15.484 22:20:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:15.484 22:20:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- 
# remove_spdk_ns 00:23:15.484 22:20:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.484 22:20:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:15.484 22:20:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.398 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:17.398 00:23:17.398 real 0m7.814s 00:23:17.398 user 0m23.323s 00:23:17.398 sys 0m1.275s 00:23:17.398 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:17.398 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:17.398 ************************************ 00:23:17.398 END TEST nvmf_shutdown_tc2 00:23:17.398 ************************************ 00:23:17.398 22:20:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:17.398 22:20:42 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:17.398 22:20:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:17.398 22:20:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:17.398 22:20:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:17.660 ************************************ 00:23:17.660 START TEST nvmf_shutdown_tc3 00:23:17.660 ************************************ 00:23:17.660 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:23:17.660 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:23:17.660 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:17.660 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:17.660 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:17.660 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:17.660 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:17.660 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:17.660 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.660 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:17.660 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.660 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:17.660 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:17.660 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:17.660 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:17.660 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:17.660 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # 
pci_devs=() 00:23:17.660 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:17.660 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:17.660 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:17.660 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:17.660 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:17.660 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:23:17.660 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:17.660 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:23:17.660 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:23:17.660 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:23:17.660 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:23:17.660 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:23:17.660 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:17.660 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:17.660 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:17.660 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:17.661 22:20:42 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:17.661 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:17.661 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:17.661 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
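The trace above resolves each supported NIC to its kernel network interface by globbing sysfs under the device's PCI address. A minimal standalone sketch of that lookup, assuming bash and using the 0000:4b:00.0 address reported in the log purely as an example (this is an illustration, not the nvmf/common.sh helper itself):

#!/usr/bin/env bash
# Resolve the net interface(s) exposed by a PCI device via sysfs, using the
# same glob pattern as pci_net_devs in the trace above.
pci=0000:4b:00.0                          # example address taken from the log
for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
  [ -e "$netdir" ] || continue            # glob stays literal if no net device is bound
  echo "Found net devices under $pci: $(basename "$netdir")"
done

In the run above this yields cvl_0_0 for 0000:4b:00.0 and cvl_0_1 for 0000:4b:00.1, which then become the target and initiator interfaces.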
00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:17.661 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:17.661 22:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:17.923 22:20:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:23:17.923 22:20:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:17.923 22:20:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:17.923 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:17.923 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.520 ms 00:23:17.923 00:23:17.923 --- 10.0.0.2 ping statistics --- 00:23:17.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.923 rtt min/avg/max/mdev = 0.520/0.520/0.520/0.000 ms 00:23:17.923 22:20:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:17.923 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:17.923 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.442 ms 00:23:17.923 00:23:17.923 --- 10.0.0.1 ping statistics --- 00:23:17.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.923 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:23:17.923 22:20:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:17.923 22:20:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:23:17.923 22:20:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:17.923 22:20:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:17.923 22:20:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:17.923 22:20:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:17.923 22:20:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:17.923 22:20:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:17.923 22:20:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:17.923 22:20:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:17.923 22:20:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:17.923 22:20:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:17.923 22:20:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:17.923 22:20:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2849661 00:23:17.923 22:20:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2849661 00:23:17.923 22:20:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2849661 ']' 00:23:17.923 22:20:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:17.923 22:20:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.923 22:20:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:17.923 22:20:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:17.923 22:20:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:17.923 22:20:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:17.923 [2024-07-15 22:20:43.233789] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:23:17.923 [2024-07-15 22:20:43.233859] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:18.183 EAL: No free 2048 kB hugepages reported on node 1 00:23:18.183 [2024-07-15 22:20:43.322358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:18.183 [2024-07-15 22:20:43.383796] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:18.183 [2024-07-15 22:20:43.383831] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:18.183 [2024-07-15 22:20:43.383836] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:18.183 [2024-07-15 22:20:43.383841] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:18.183 [2024-07-15 22:20:43.383845] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:18.183 [2024-07-15 22:20:43.383955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:18.183 [2024-07-15 22:20:43.384081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:18.183 [2024-07-15 22:20:43.384211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:18.183 [2024-07-15 22:20:43.384343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:18.753 22:20:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:18.753 22:20:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:23:18.753 22:20:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:18.753 22:20:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:18.753 22:20:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:18.753 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:18.753 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:18.753 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.753 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:18.753 [2024-07-15 22:20:44.038442] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:18.753 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.753 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:18.753 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:18.753 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:18.753 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:18.753 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:18.753 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.753 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:18.753 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.753 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:18.753 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.753 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:18.753 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.753 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:18.753 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.753 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:19.013 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:19.013 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:19.013 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:19.013 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:19.013 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:19.013 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:19.013 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:19.013 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:19.013 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:19.013 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:19.013 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:19.013 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.013 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:19.013 Malloc1 00:23:19.013 [2024-07-15 22:20:44.137191] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:19.013 Malloc2 00:23:19.013 Malloc3 00:23:19.013 Malloc4 00:23:19.013 Malloc5 00:23:19.013 Malloc6 00:23:19.275 Malloc7 00:23:19.275 Malloc8 00:23:19.275 Malloc9 00:23:19.275 Malloc10 00:23:19.275 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.275 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # 
timing_exit create_subsystems 00:23:19.275 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:19.275 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:19.275 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2849869 00:23:19.275 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2849869 /var/tmp/bdevperf.sock 00:23:19.275 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2849869 ']' 00:23:19.275 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:19.275 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:19.275 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:19.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:19.275 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:19.275 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:19.275 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:19.275 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:19.275 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:23:19.275 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:23:19.275 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:19.275 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:19.275 { 00:23:19.275 "params": { 00:23:19.275 "name": "Nvme$subsystem", 00:23:19.275 "trtype": "$TEST_TRANSPORT", 00:23:19.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.275 "adrfam": "ipv4", 00:23:19.275 "trsvcid": "$NVMF_PORT", 00:23:19.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.275 "hdgst": ${hdgst:-false}, 00:23:19.275 "ddgst": ${ddgst:-false} 00:23:19.275 }, 00:23:19.275 "method": "bdev_nvme_attach_controller" 00:23:19.275 } 00:23:19.275 EOF 00:23:19.275 )") 00:23:19.275 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:19.275 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:19.275 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:19.275 { 00:23:19.275 "params": { 00:23:19.275 "name": "Nvme$subsystem", 00:23:19.275 "trtype": "$TEST_TRANSPORT", 00:23:19.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.275 "adrfam": "ipv4", 00:23:19.275 "trsvcid": "$NVMF_PORT", 00:23:19.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.275 "hdgst": ${hdgst:-false}, 00:23:19.275 "ddgst": ${ddgst:-false} 
00:23:19.275 }, 00:23:19.275 "method": "bdev_nvme_attach_controller" 00:23:19.275 } 00:23:19.275 EOF 00:23:19.275 )") 00:23:19.275 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:19.275 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:19.275 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:19.275 { 00:23:19.275 "params": { 00:23:19.275 "name": "Nvme$subsystem", 00:23:19.275 "trtype": "$TEST_TRANSPORT", 00:23:19.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.275 "adrfam": "ipv4", 00:23:19.275 "trsvcid": "$NVMF_PORT", 00:23:19.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.275 "hdgst": ${hdgst:-false}, 00:23:19.275 "ddgst": ${ddgst:-false} 00:23:19.275 }, 00:23:19.275 "method": "bdev_nvme_attach_controller" 00:23:19.275 } 00:23:19.275 EOF 00:23:19.275 )") 00:23:19.275 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:19.275 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:19.275 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:19.275 { 00:23:19.275 "params": { 00:23:19.276 "name": "Nvme$subsystem", 00:23:19.276 "trtype": "$TEST_TRANSPORT", 00:23:19.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.276 "adrfam": "ipv4", 00:23:19.276 "trsvcid": "$NVMF_PORT", 00:23:19.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.276 "hdgst": ${hdgst:-false}, 00:23:19.276 "ddgst": ${ddgst:-false} 00:23:19.276 }, 00:23:19.276 "method": "bdev_nvme_attach_controller" 00:23:19.276 } 00:23:19.276 EOF 00:23:19.276 )") 00:23:19.276 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:19.276 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:19.276 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:19.276 { 00:23:19.276 "params": { 00:23:19.276 "name": "Nvme$subsystem", 00:23:19.276 "trtype": "$TEST_TRANSPORT", 00:23:19.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.276 "adrfam": "ipv4", 00:23:19.276 "trsvcid": "$NVMF_PORT", 00:23:19.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.276 "hdgst": ${hdgst:-false}, 00:23:19.276 "ddgst": ${ddgst:-false} 00:23:19.276 }, 00:23:19.276 "method": "bdev_nvme_attach_controller" 00:23:19.276 } 00:23:19.276 EOF 00:23:19.276 )") 00:23:19.276 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:19.276 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:19.276 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:19.276 { 00:23:19.276 "params": { 00:23:19.276 "name": "Nvme$subsystem", 00:23:19.276 "trtype": "$TEST_TRANSPORT", 00:23:19.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.276 "adrfam": "ipv4", 00:23:19.276 "trsvcid": "$NVMF_PORT", 00:23:19.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.276 "hdgst": ${hdgst:-false}, 00:23:19.276 "ddgst": ${ddgst:-false} 00:23:19.276 }, 00:23:19.276 
"method": "bdev_nvme_attach_controller" 00:23:19.276 } 00:23:19.276 EOF 00:23:19.276 )") 00:23:19.276 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:19.276 [2024-07-15 22:20:44.578370] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:23:19.276 [2024-07-15 22:20:44.578425] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2849869 ] 00:23:19.276 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:19.276 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:19.276 { 00:23:19.276 "params": { 00:23:19.276 "name": "Nvme$subsystem", 00:23:19.276 "trtype": "$TEST_TRANSPORT", 00:23:19.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.276 "adrfam": "ipv4", 00:23:19.276 "trsvcid": "$NVMF_PORT", 00:23:19.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.276 "hdgst": ${hdgst:-false}, 00:23:19.276 "ddgst": ${ddgst:-false} 00:23:19.276 }, 00:23:19.276 "method": "bdev_nvme_attach_controller" 00:23:19.276 } 00:23:19.276 EOF 00:23:19.276 )") 00:23:19.276 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:19.276 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:19.276 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:19.276 { 00:23:19.276 "params": { 00:23:19.276 "name": "Nvme$subsystem", 00:23:19.276 "trtype": "$TEST_TRANSPORT", 00:23:19.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.276 "adrfam": "ipv4", 00:23:19.276 "trsvcid": "$NVMF_PORT", 00:23:19.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.276 "hdgst": ${hdgst:-false}, 00:23:19.276 "ddgst": ${ddgst:-false} 00:23:19.276 }, 00:23:19.276 "method": "bdev_nvme_attach_controller" 00:23:19.276 } 00:23:19.276 EOF 00:23:19.276 )") 00:23:19.276 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:19.276 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:19.276 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:19.276 { 00:23:19.276 "params": { 00:23:19.276 "name": "Nvme$subsystem", 00:23:19.276 "trtype": "$TEST_TRANSPORT", 00:23:19.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.276 "adrfam": "ipv4", 00:23:19.276 "trsvcid": "$NVMF_PORT", 00:23:19.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.276 "hdgst": ${hdgst:-false}, 00:23:19.276 "ddgst": ${ddgst:-false} 00:23:19.276 }, 00:23:19.276 "method": "bdev_nvme_attach_controller" 00:23:19.276 } 00:23:19.276 EOF 00:23:19.276 )") 00:23:19.276 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:19.537 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:19.537 EAL: No free 2048 kB hugepages reported on node 1 00:23:19.537 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:19.537 { 
00:23:19.537 "params": { 00:23:19.537 "name": "Nvme$subsystem", 00:23:19.537 "trtype": "$TEST_TRANSPORT", 00:23:19.537 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.537 "adrfam": "ipv4", 00:23:19.537 "trsvcid": "$NVMF_PORT", 00:23:19.537 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.537 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.537 "hdgst": ${hdgst:-false}, 00:23:19.537 "ddgst": ${ddgst:-false} 00:23:19.537 }, 00:23:19.537 "method": "bdev_nvme_attach_controller" 00:23:19.537 } 00:23:19.537 EOF 00:23:19.537 )") 00:23:19.537 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:19.537 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:23:19.537 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:23:19.537 22:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:19.537 "params": { 00:23:19.537 "name": "Nvme1", 00:23:19.537 "trtype": "tcp", 00:23:19.537 "traddr": "10.0.0.2", 00:23:19.537 "adrfam": "ipv4", 00:23:19.537 "trsvcid": "4420", 00:23:19.537 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:19.537 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:19.537 "hdgst": false, 00:23:19.537 "ddgst": false 00:23:19.537 }, 00:23:19.537 "method": "bdev_nvme_attach_controller" 00:23:19.537 },{ 00:23:19.537 "params": { 00:23:19.537 "name": "Nvme2", 00:23:19.537 "trtype": "tcp", 00:23:19.537 "traddr": "10.0.0.2", 00:23:19.537 "adrfam": "ipv4", 00:23:19.537 "trsvcid": "4420", 00:23:19.537 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:19.537 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:19.537 "hdgst": false, 00:23:19.537 "ddgst": false 00:23:19.537 }, 00:23:19.537 "method": "bdev_nvme_attach_controller" 00:23:19.537 },{ 00:23:19.537 "params": { 00:23:19.537 "name": "Nvme3", 00:23:19.537 "trtype": "tcp", 00:23:19.537 "traddr": "10.0.0.2", 00:23:19.537 "adrfam": "ipv4", 00:23:19.537 "trsvcid": "4420", 00:23:19.537 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:19.537 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:19.537 "hdgst": false, 00:23:19.537 "ddgst": false 00:23:19.537 }, 00:23:19.537 "method": "bdev_nvme_attach_controller" 00:23:19.537 },{ 00:23:19.537 "params": { 00:23:19.537 "name": "Nvme4", 00:23:19.537 "trtype": "tcp", 00:23:19.537 "traddr": "10.0.0.2", 00:23:19.537 "adrfam": "ipv4", 00:23:19.537 "trsvcid": "4420", 00:23:19.537 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:19.537 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:19.537 "hdgst": false, 00:23:19.537 "ddgst": false 00:23:19.537 }, 00:23:19.537 "method": "bdev_nvme_attach_controller" 00:23:19.537 },{ 00:23:19.537 "params": { 00:23:19.537 "name": "Nvme5", 00:23:19.537 "trtype": "tcp", 00:23:19.537 "traddr": "10.0.0.2", 00:23:19.537 "adrfam": "ipv4", 00:23:19.537 "trsvcid": "4420", 00:23:19.537 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:19.537 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:19.538 "hdgst": false, 00:23:19.538 "ddgst": false 00:23:19.538 }, 00:23:19.538 "method": "bdev_nvme_attach_controller" 00:23:19.538 },{ 00:23:19.538 "params": { 00:23:19.538 "name": "Nvme6", 00:23:19.538 "trtype": "tcp", 00:23:19.538 "traddr": "10.0.0.2", 00:23:19.538 "adrfam": "ipv4", 00:23:19.538 "trsvcid": "4420", 00:23:19.538 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:19.538 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:19.538 "hdgst": false, 00:23:19.538 "ddgst": false 00:23:19.538 }, 00:23:19.538 "method": "bdev_nvme_attach_controller" 00:23:19.538 },{ 00:23:19.538 "params": { 
00:23:19.538 "name": "Nvme7", 00:23:19.538 "trtype": "tcp", 00:23:19.538 "traddr": "10.0.0.2", 00:23:19.538 "adrfam": "ipv4", 00:23:19.538 "trsvcid": "4420", 00:23:19.538 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:19.538 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:19.538 "hdgst": false, 00:23:19.538 "ddgst": false 00:23:19.538 }, 00:23:19.538 "method": "bdev_nvme_attach_controller" 00:23:19.538 },{ 00:23:19.538 "params": { 00:23:19.538 "name": "Nvme8", 00:23:19.538 "trtype": "tcp", 00:23:19.538 "traddr": "10.0.0.2", 00:23:19.538 "adrfam": "ipv4", 00:23:19.538 "trsvcid": "4420", 00:23:19.538 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:19.538 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:19.538 "hdgst": false, 00:23:19.538 "ddgst": false 00:23:19.538 }, 00:23:19.538 "method": "bdev_nvme_attach_controller" 00:23:19.538 },{ 00:23:19.538 "params": { 00:23:19.538 "name": "Nvme9", 00:23:19.538 "trtype": "tcp", 00:23:19.538 "traddr": "10.0.0.2", 00:23:19.538 "adrfam": "ipv4", 00:23:19.538 "trsvcid": "4420", 00:23:19.538 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:19.538 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:19.538 "hdgst": false, 00:23:19.538 "ddgst": false 00:23:19.538 }, 00:23:19.538 "method": "bdev_nvme_attach_controller" 00:23:19.538 },{ 00:23:19.538 "params": { 00:23:19.538 "name": "Nvme10", 00:23:19.538 "trtype": "tcp", 00:23:19.538 "traddr": "10.0.0.2", 00:23:19.538 "adrfam": "ipv4", 00:23:19.538 "trsvcid": "4420", 00:23:19.538 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:19.538 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:19.538 "hdgst": false, 00:23:19.538 "ddgst": false 00:23:19.538 }, 00:23:19.538 "method": "bdev_nvme_attach_controller" 00:23:19.538 }' 00:23:19.538 [2024-07-15 22:20:44.637958] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.538 [2024-07-15 22:20:44.702620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:20.922 Running I/O for 10 seconds... 
00:23:20.922 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:20.922 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:23:20.922 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:20.922 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.922 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:20.922 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.922 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:20.922 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:20.922 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:20.922 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:20.922 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:23:20.922 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:23:20.922 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:20.922 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:20.922 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:20.922 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:20.922 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.922 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:20.922 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.183 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:21.183 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:21.183 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:21.444 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:21.444 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:21.444 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:21.444 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:21.444 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.444 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:21.444 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.444 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 
-- # read_io_count=67 00:23:21.444 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:21.444 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:21.719 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:21.719 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:21.719 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:21.719 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:21.719 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.719 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:21.719 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.719 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:21.719 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:21.719 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:23:21.719 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:23:21.719 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:23:21.719 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2849661 00:23:21.719 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 2849661 ']' 00:23:21.719 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 2849661 00:23:21.719 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:23:21.719 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:21.719 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2849661 00:23:21.719 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:21.719 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:21.719 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2849661' 00:23:21.719 killing process with pid 2849661 00:23:21.719 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 2849661 00:23:21.719 22:20:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 2849661 00:23:21.719 [2024-07-15 22:20:46.916055] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f99ae0 is same with the state(5) to be set 00:23:21.719 [2024-07-15 22:20:46.916660] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d1db0 is same with the state(5) to be set 00:23:21.719 [2024-07-15 22:20:46.916682] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d1db0 is same with the state(5) to be set 00:23:21.719 [2024-07-15 22:20:46.916688] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
00:23:21.719 [2024-07-15 22:20:46.916055] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f99ae0 is same with the state(5) to be set
00:23:21.719 [2024-07-15 22:20:46.916660] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d1db0 is same with the state(5) to be set
[the same recv-state error repeats for tqpair=0x21d1db0 through 22:20:46.916967]
00:23:21.720 [2024-07-15 22:20:46.919094] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9ada0 is same with the state(5) to be set
[the same recv-state error repeats for tqpair=0x1f9ada0 through 22:20:46.919403]
00:23:21.721 [2024-07-15 22:20:46.920089] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9b240 is same with the state(5) to be set
[the same recv-state error repeats for tqpair=0x1f9b240 through 22:20:46.920386]
00:23:21.721 [2024-07-15 22:20:46.921636] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d1450 is same with the state(5) to be set
[the same recv-state error repeats for tqpair=0x21d1450 through 22:20:46.921924]
00:23:21.722 [2024-07-15 22:20:46.927895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:21.722 [2024-07-15 22:20:46.927929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the remaining pending ASYNC EVENT REQUESTs (cid:1-3) on this admin qpair are aborted with the same SQ DELETION status]
00:23:21.722 [2024-07-15 22:20:46.927987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13411b0 is same with the state(5) to be set
[the same abort sequence and recv-state error repeat for the admin qpairs at tqpair=0xe70340, 0x135b030, 0x131e5d0, 0x135aca0, 0x14df0c0, 0x14ea210, 0x14f2990, 0x14bbe90 and 0x14e0290, through 22:20:46.928763]
00:23:21.723 [2024-07-15 22:20:46.929049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.723 [2024-07-15 22:20:46.929069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the remaining in-flight WRITEs on sqid:1 (cid:7-37, lba:25472-29312, len:128) are aborted the same way through 22:20:46.929591]
00:23:21.724 [2024-07-15 22:20:46.929600] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.724 [2024-07-15 22:20:46.929607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.724 [2024-07-15 22:20:46.929616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.724 [2024-07-15 22:20:46.929623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.724 [2024-07-15 22:20:46.929632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.724 [2024-07-15 22:20:46.929639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.724 [2024-07-15 22:20:46.929649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.724 [2024-07-15 22:20:46.929656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.724 [2024-07-15 22:20:46.929665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.724 [2024-07-15 22:20:46.929671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.724 [2024-07-15 22:20:46.929683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.724 [2024-07-15 22:20:46.929690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.724 [2024-07-15 22:20:46.929699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.724 [2024-07-15 22:20:46.929706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.724 [2024-07-15 22:20:46.929715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.724 [2024-07-15 22:20:46.929722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.724 [2024-07-15 22:20:46.929731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.724 [2024-07-15 22:20:46.929738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.724 [2024-07-15 22:20:46.929747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.724 [2024-07-15 22:20:46.929754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.724 [2024-07-15 22:20:46.929763] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.724 [2024-07-15 22:20:46.929769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.724 [2024-07-15 22:20:46.929778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.724 [2024-07-15 22:20:46.929786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.724 [2024-07-15 22:20:46.929795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.724 [2024-07-15 22:20:46.929802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.724 [2024-07-15 22:20:46.929811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.725 [2024-07-15 22:20:46.929817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.725 [2024-07-15 22:20:46.929827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.725 [2024-07-15 22:20:46.929834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.725 [2024-07-15 22:20:46.929843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.725 [2024-07-15 22:20:46.929850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.725 [2024-07-15 22:20:46.929859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.725 [2024-07-15 22:20:46.929866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.725 [2024-07-15 22:20:46.929875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.725 [2024-07-15 22:20:46.929884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.725 [2024-07-15 22:20:46.929893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.725 [2024-07-15 22:20:46.929900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.725 [2024-07-15 22:20:46.929909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.725 [2024-07-15 22:20:46.929917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.725 [2024-07-15 22:20:46.929926] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.725 [2024-07-15 22:20:46.929933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.725 [2024-07-15 22:20:46.929942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.725 [2024-07-15 22:20:46.929949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.725 [2024-07-15 22:20:46.929958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.725 [2024-07-15 22:20:46.929965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.725 [2024-07-15 22:20:46.929975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.725 [2024-07-15 22:20:46.929982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.725 [2024-07-15 22:20:46.929991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.725 [2024-07-15 22:20:46.929998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.725 [2024-07-15 22:20:46.930007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.725 [2024-07-15 22:20:46.930014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.725 [2024-07-15 22:20:46.930023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.725 [2024-07-15 22:20:46.930030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.725 [2024-07-15 22:20:46.930039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.725 [2024-07-15 22:20:46.930046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.725 [2024-07-15 22:20:46.930055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.725 [2024-07-15 22:20:46.930062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.725 [2024-07-15 22:20:46.930071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.725 [2024-07-15 22:20:46.930079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.725 [2024-07-15 22:20:46.930092] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.725 [2024-07-15 22:20:46.930099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.725 [2024-07-15 22:20:46.930108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.725 [2024-07-15 22:20:46.930115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.725 [2024-07-15 22:20:46.930126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14594e0 is same with the state(5) to be set 00:23:21.725 [2024-07-15 22:20:46.930178] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14594e0 was disconnected and freed. reset controller. 00:23:21.725 [2024-07-15 22:20:46.930256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.725 [2024-07-15 22:20:46.930264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.725 [2024-07-15 22:20:46.930275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.725 [2024-07-15 22:20:46.930282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.725 [2024-07-15 22:20:46.930292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.725 [2024-07-15 22:20:46.930299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.725 [2024-07-15 22:20:46.930308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.725 [2024-07-15 22:20:46.930315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.725 [2024-07-15 22:20:46.930324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.725 [2024-07-15 22:20:46.930331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.725 [2024-07-15 22:20:46.930340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.725 [2024-07-15 22:20:46.930347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.725 [2024-07-15 22:20:46.930356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.725 [2024-07-15 22:20:46.930363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.725 [2024-07-15 22:20:46.930372] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.725 [2024-07-15 22:20:46.930379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.725 [2024-07-15 22:20:46.930388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.725 [2024-07-15 22:20:46.930395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.725 [2024-07-15 22:20:46.930404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.725 [2024-07-15 22:20:46.930414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.725 [2024-07-15 22:20:46.930423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.725 [2024-07-15 22:20:46.930431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.725 [2024-07-15 22:20:46.930440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.725 [2024-07-15 22:20:46.930447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.725 [2024-07-15 22:20:46.930456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.725 [2024-07-15 22:20:46.930463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.725 [2024-07-15 22:20:46.930472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.725 [2024-07-15 22:20:46.930479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.725 [2024-07-15 22:20:46.930488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.725 [2024-07-15 22:20:46.930495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.725 [2024-07-15 22:20:46.930504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.725 [2024-07-15 22:20:46.930511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.725 [2024-07-15 22:20:46.930520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.725 [2024-07-15 22:20:46.930527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.725 [2024-07-15 22:20:46.930536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.725 [2024-07-15 22:20:46.930543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.725 [2024-07-15 22:20:46.930552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.725 [2024-07-15 22:20:46.930560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.725 [2024-07-15 22:20:46.930569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.725 [2024-07-15 22:20:46.930576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.726 [2024-07-15 22:20:46.930586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-07-15 22:20:46.930592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.726 [2024-07-15 22:20:46.930601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-07-15 22:20:46.930608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.726 [2024-07-15 22:20:46.930619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-07-15 22:20:46.930626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.726 [2024-07-15 22:20:46.930635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-07-15 22:20:46.930642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.726 [2024-07-15 22:20:46.930651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-07-15 22:20:46.930658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.726 [2024-07-15 22:20:46.930667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-07-15 22:20:46.930674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.726 [2024-07-15 22:20:46.930683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-07-15 22:20:46.930690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.726 [2024-07-15 22:20:46.930699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-07-15 22:20:46.930706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.726 [2024-07-15 22:20:46.930715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-07-15 22:20:46.930722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.726 [2024-07-15 22:20:46.930731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-07-15 22:20:46.930738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.726 [2024-07-15 22:20:46.930747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-07-15 22:20:46.930755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.726 [2024-07-15 22:20:46.930764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-07-15 22:20:46.930771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.726 [2024-07-15 22:20:46.930780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-07-15 22:20:46.930787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.726 [2024-07-15 22:20:46.930796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-07-15 22:20:46.930803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.726 [2024-07-15 22:20:46.930812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-07-15 22:20:46.930821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.726 [2024-07-15 22:20:46.930830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-07-15 22:20:46.930837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.726 [2024-07-15 22:20:46.930846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-07-15 22:20:46.930853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.726 [2024-07-15 22:20:46.930862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-07-15 22:20:46.930869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.726 [2024-07-15 22:20:46.930879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-07-15 22:20:46.930885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.726 [2024-07-15 22:20:46.930894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-07-15 22:20:46.930902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.726 [2024-07-15 22:20:46.930911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-07-15 22:20:46.930918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.726 [2024-07-15 22:20:46.930927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-07-15 22:20:46.930934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.726 [2024-07-15 22:20:46.930943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-07-15 22:20:46.930950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.726 [2024-07-15 22:20:46.930959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-07-15 22:20:46.930967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.726 [2024-07-15 22:20:46.930976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-07-15 22:20:46.930983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.726 [2024-07-15 22:20:46.930992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-07-15 22:20:46.930998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.726 [2024-07-15 22:20:46.931008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-07-15 22:20:46.931015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.726 [2024-07-15 22:20:46.931025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:21.726 [2024-07-15 22:20:46.931032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.726 [2024-07-15 22:20:46.931041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-07-15 22:20:46.931048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.726 [2024-07-15 22:20:46.931057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-07-15 22:20:46.931064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.726 [2024-07-15 22:20:46.931073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-07-15 22:20:46.931080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.726 [2024-07-15 22:20:46.931090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-07-15 22:20:46.931097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.726 [2024-07-15 22:20:46.931106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-07-15 22:20:46.931113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.726 [2024-07-15 22:20:46.931127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-07-15 22:20:46.938651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.726 [2024-07-15 22:20:46.938697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-07-15 22:20:46.938706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.726 [2024-07-15 22:20:46.938716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-07-15 22:20:46.938724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.726 [2024-07-15 22:20:46.938734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-07-15 22:20:46.938741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.726 [2024-07-15 22:20:46.938750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:21.726 [2024-07-15 22:20:46.938757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.726 [2024-07-15 22:20:46.938766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-07-15 22:20:46.938774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.726 [2024-07-15 22:20:46.938783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-07-15 22:20:46.938795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.726 [2024-07-15 22:20:46.938805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-07-15 22:20:46.938812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.727 [2024-07-15 22:20:46.938822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-07-15 22:20:46.938829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.727 [2024-07-15 22:20:46.938839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-07-15 22:20:46.938846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.727 [2024-07-15 22:20:46.938855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-07-15 22:20:46.938862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.727 [2024-07-15 22:20:46.938930] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x145a970 was disconnected and freed. reset controller. 
00:23:21.727 [2024-07-15 22:20:46.939025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-07-15 22:20:46.939035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.727 [2024-07-15 22:20:46.939048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-07-15 22:20:46.939056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.727 [2024-07-15 22:20:46.939065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-07-15 22:20:46.939072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.727 [2024-07-15 22:20:46.939081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-07-15 22:20:46.939088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.727 [2024-07-15 22:20:46.939097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-07-15 22:20:46.939104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.727 [2024-07-15 22:20:46.939113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-07-15 22:20:46.939121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.727 [2024-07-15 22:20:46.939139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-07-15 22:20:46.939146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.727 [2024-07-15 22:20:46.939155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-07-15 22:20:46.939165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.727 [2024-07-15 22:20:46.939175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-07-15 22:20:46.939182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.727 [2024-07-15 22:20:46.939191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-07-15 22:20:46.939199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.727 
[2024-07-15 22:20:46.939208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-07-15 22:20:46.939215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.727 [2024-07-15 22:20:46.939224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-07-15 22:20:46.939231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.727 [2024-07-15 22:20:46.939240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-07-15 22:20:46.939247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.727 [2024-07-15 22:20:46.939256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-07-15 22:20:46.939264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.727 [2024-07-15 22:20:46.939273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-07-15 22:20:46.939280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.727 [2024-07-15 22:20:46.939289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-07-15 22:20:46.939296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.727 [2024-07-15 22:20:46.939305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-07-15 22:20:46.939313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.727 [2024-07-15 22:20:46.939322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-07-15 22:20:46.939329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.727 [2024-07-15 22:20:46.939339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-07-15 22:20:46.939346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.727 [2024-07-15 22:20:46.939355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-07-15 22:20:46.939362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.727 [2024-07-15 
22:20:46.939374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-07-15 22:20:46.939381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.727 [2024-07-15 22:20:46.939390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-07-15 22:20:46.939397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.727 [2024-07-15 22:20:46.939406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-07-15 22:20:46.939413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.727 [2024-07-15 22:20:46.939422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-07-15 22:20:46.939429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.727 [2024-07-15 22:20:46.939438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-07-15 22:20:46.939446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.727 [2024-07-15 22:20:46.939455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-07-15 22:20:46.939462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.727 [2024-07-15 22:20:46.939471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-07-15 22:20:46.939478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.727 [2024-07-15 22:20:46.939487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-07-15 22:20:46.939494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.727 [2024-07-15 22:20:46.939503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-07-15 22:20:46.939510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.727 [2024-07-15 22:20:46.939519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-07-15 22:20:46.939526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.727 [2024-07-15 22:20:46.939535] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-07-15 22:20:46.939542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.727 [2024-07-15 22:20:46.939551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-07-15 22:20:46.939558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.727 [2024-07-15 22:20:46.939567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-07-15 22:20:46.939573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.727 [2024-07-15 22:20:46.939585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-07-15 22:20:46.939592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.727 [2024-07-15 22:20:46.939600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-07-15 22:20:46.939607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.727 [2024-07-15 22:20:46.939616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-07-15 22:20:46.939624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.727 [2024-07-15 22:20:46.939633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-07-15 22:20:46.939639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.727 [2024-07-15 22:20:46.939648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-07-15 22:20:46.939656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.727 [2024-07-15 22:20:46.939664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-07-15 22:20:46.939671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.728 [2024-07-15 22:20:46.939680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.728 [2024-07-15 22:20:46.939687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.728 [2024-07-15 22:20:46.939696] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.728 [2024-07-15 22:20:46.939703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.728 [2024-07-15 22:20:46.939712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.728 [2024-07-15 22:20:46.939719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.728 [2024-07-15 22:20:46.939728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.728 [2024-07-15 22:20:46.939735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.728 [2024-07-15 22:20:46.939744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.728 [2024-07-15 22:20:46.939751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.728 [2024-07-15 22:20:46.939760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.728 [2024-07-15 22:20:46.939767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.728 [2024-07-15 22:20:46.939776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.728 [2024-07-15 22:20:46.939784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.728 [2024-07-15 22:20:46.939794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.728 [2024-07-15 22:20:46.939801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.728 [2024-07-15 22:20:46.939810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.728 [2024-07-15 22:20:46.939817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.728 [2024-07-15 22:20:46.939826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.728 [2024-07-15 22:20:46.939833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.728 [2024-07-15 22:20:46.939842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.728 [2024-07-15 22:20:46.939849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.728 [2024-07-15 22:20:46.939858] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.728 [2024-07-15 22:20:46.939865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.728 [2024-07-15 22:20:46.939875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.728 [2024-07-15 22:20:46.939882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.728 [2024-07-15 22:20:46.939890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.728 [2024-07-15 22:20:46.939897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.728 [2024-07-15 22:20:46.939906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.728 [2024-07-15 22:20:46.939913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.728 [2024-07-15 22:20:46.939922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.728 [2024-07-15 22:20:46.939929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.728 [2024-07-15 22:20:46.939938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.728 [2024-07-15 22:20:46.939945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.728 [2024-07-15 22:20:46.939954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.728 [2024-07-15 22:20:46.939961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.728 [2024-07-15 22:20:46.939970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.728 [2024-07-15 22:20:46.939977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.728 [2024-07-15 22:20:46.939988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.728 [2024-07-15 22:20:46.939995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.728 [2024-07-15 22:20:46.940004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.728 [2024-07-15 22:20:46.940011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.728 [2024-07-15 22:20:46.940020] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.728 [2024-07-15 22:20:46.940027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.728 [2024-07-15 22:20:46.940036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.728 [2024-07-15 22:20:46.940043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.728 [2024-07-15 22:20:46.940052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.728 [2024-07-15 22:20:46.940060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.728 [2024-07-15 22:20:46.940069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.728 [2024-07-15 22:20:46.940075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.728 [2024-07-15 22:20:46.940131] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1318530 was disconnected and freed. reset controller. 00:23:21.728 [2024-07-15 22:20:46.940324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.728 [2024-07-15 22:20:46.940336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.728 [2024-07-15 22:20:46.940349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.728 [2024-07-15 22:20:46.940356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.728 [2024-07-15 22:20:46.940366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.728 [2024-07-15 22:20:46.940373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.728 [2024-07-15 22:20:46.940382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.728 [2024-07-15 22:20:46.940389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.728 [2024-07-15 22:20:46.940398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.728 [2024-07-15 22:20:46.940405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.728 [2024-07-15 22:20:46.940415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.728 [2024-07-15 22:20:46.940422] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.728 [2024-07-15 22:20:46.940435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-07-15 22:20:46.940442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.729 [2024-07-15 22:20:46.940451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-07-15 22:20:46.940458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.729 [2024-07-15 22:20:46.940467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-07-15 22:20:46.940474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.729 [2024-07-15 22:20:46.940483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-07-15 22:20:46.940490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.729 [2024-07-15 22:20:46.940499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-07-15 22:20:46.940506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.729 [2024-07-15 22:20:46.940515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-07-15 22:20:46.940522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.729 [2024-07-15 22:20:46.940531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-07-15 22:20:46.940538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.729 [2024-07-15 22:20:46.940547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-07-15 22:20:46.940554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.729 [2024-07-15 22:20:46.940564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-07-15 22:20:46.940571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.729 [2024-07-15 22:20:46.940579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-07-15 22:20:46.940587] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.729 [2024-07-15 22:20:46.940596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-07-15 22:20:46.940602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.729 [2024-07-15 22:20:46.940611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-07-15 22:20:46.940619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.729 [2024-07-15 22:20:46.940628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-07-15 22:20:46.940637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.729 [2024-07-15 22:20:46.940646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-07-15 22:20:46.940653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.729 [2024-07-15 22:20:46.940662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-07-15 22:20:46.940669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.729 [2024-07-15 22:20:46.940678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-07-15 22:20:46.940685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.729 [2024-07-15 22:20:46.940695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-07-15 22:20:46.940702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.729 [2024-07-15 22:20:46.940710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-07-15 22:20:46.940718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.729 [2024-07-15 22:20:46.940726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-07-15 22:20:46.940733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.729 [2024-07-15 22:20:46.940743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-07-15 22:20:46.940750] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.729 [2024-07-15 22:20:46.940759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-07-15 22:20:46.940766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.729 [2024-07-15 22:20:46.940775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-07-15 22:20:46.940782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.729 [2024-07-15 22:20:46.940791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-07-15 22:20:46.940798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.729 [2024-07-15 22:20:46.940807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-07-15 22:20:46.940814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.729 [2024-07-15 22:20:46.940823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-07-15 22:20:46.940830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.729 [2024-07-15 22:20:46.940842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-07-15 22:20:46.940849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.729 [2024-07-15 22:20:46.940858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-07-15 22:20:46.940865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.729 [2024-07-15 22:20:46.940874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-07-15 22:20:46.940881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.729 [2024-07-15 22:20:46.940890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-07-15 22:20:46.940897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.729 [2024-07-15 22:20:46.940906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-07-15 22:20:46.940913] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.729 [2024-07-15 22:20:46.940922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-07-15 22:20:46.940929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.729 [2024-07-15 22:20:46.940938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-07-15 22:20:46.940946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.729 [2024-07-15 22:20:46.940955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-07-15 22:20:46.940962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.729 [2024-07-15 22:20:46.940971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-07-15 22:20:46.940979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.729 [2024-07-15 22:20:46.940988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-07-15 22:20:46.940995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.729 [2024-07-15 22:20:46.941004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-07-15 22:20:46.941011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.729 [2024-07-15 22:20:46.941020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-07-15 22:20:46.941027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.729 [2024-07-15 22:20:46.941036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-07-15 22:20:46.941045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.729 [2024-07-15 22:20:46.941054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-07-15 22:20:46.941061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.729 [2024-07-15 22:20:46.941070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-07-15 22:20:46.941078] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.729 [2024-07-15 22:20:46.941087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-07-15 22:20:46.941093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.729 [2024-07-15 22:20:46.941103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-07-15 22:20:46.941110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.730 [2024-07-15 22:20:46.941119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-07-15 22:20:46.941132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.730 [2024-07-15 22:20:46.941141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-07-15 22:20:46.941148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.730 [2024-07-15 22:20:46.941158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-07-15 22:20:46.941165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.730 [2024-07-15 22:20:46.941174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-07-15 22:20:46.941181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.730 [2024-07-15 22:20:46.941190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-07-15 22:20:46.941197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.730 [2024-07-15 22:20:46.941206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-07-15 22:20:46.941213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.730 [2024-07-15 22:20:46.941222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-07-15 22:20:46.941229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.730 [2024-07-15 22:20:46.941239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-07-15 22:20:46.941246] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.730 [2024-07-15 22:20:46.941260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-07-15 22:20:46.941267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.730 [2024-07-15 22:20:46.941276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-07-15 22:20:46.941283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.730 [2024-07-15 22:20:46.941292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-07-15 22:20:46.941299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.730 [2024-07-15 22:20:46.941309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-07-15 22:20:46.941316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.730 [2024-07-15 22:20:46.941325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-07-15 22:20:46.941332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.730 [2024-07-15 22:20:46.941341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-07-15 22:20:46.941348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.730 [2024-07-15 22:20:46.941357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-07-15 22:20:46.941365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.730 [2024-07-15 22:20:46.941374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-07-15 22:20:46.941381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.730 [2024-07-15 22:20:46.941431] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13c4300 was disconnected and freed. reset controller. 
00:23:21.730 [2024-07-15 22:20:46.941455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-07-15 22:20:46.941462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.730 [2024-07-15 22:20:46.941473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-07-15 22:20:46.941480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.730 [2024-07-15 22:20:46.941489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-07-15 22:20:46.941496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.730 [2024-07-15 22:20:46.941505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-07-15 22:20:46.941512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.730 [2024-07-15 22:20:46.941524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-07-15 22:20:46.941531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.730 [2024-07-15 22:20:46.941540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-07-15 22:20:46.941547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.730 [2024-07-15 22:20:46.941556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-07-15 22:20:46.941563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.730 [2024-07-15 22:20:46.941572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-07-15 22:20:46.941579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.730 [2024-07-15 22:20:46.941588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-07-15 22:20:46.941595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.730 [2024-07-15 22:20:46.941604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-07-15 22:20:46.941611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.730 [2024-07-15 
22:20:46.941620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-07-15 22:20:46.941627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.730 [2024-07-15 22:20:46.941636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-07-15 22:20:46.941643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.730 [2024-07-15 22:20:46.941652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-07-15 22:20:46.941659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.730 [2024-07-15 22:20:46.941668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-07-15 22:20:46.941675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.730 [2024-07-15 22:20:46.941684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-07-15 22:20:46.941692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.730 [2024-07-15 22:20:46.946872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-07-15 22:20:46.946906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.730 [2024-07-15 22:20:46.946918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-07-15 22:20:46.946931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.730 [2024-07-15 22:20:46.946941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-07-15 22:20:46.946948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.730 [2024-07-15 22:20:46.946958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-07-15 22:20:46.946965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.730 [2024-07-15 22:20:46.946974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-07-15 22:20:46.946981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.730 [2024-07-15 22:20:46.946990] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-07-15 22:20:46.946997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.730 [2024-07-15 22:20:46.947007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-07-15 22:20:46.947014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.730 [2024-07-15 22:20:46.947023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-07-15 22:20:46.947030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.730 [2024-07-15 22:20:46.947039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-07-15 22:20:46.947046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.730 [2024-07-15 22:20:46.947055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-07-15 22:20:46.947062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.730 [2024-07-15 22:20:46.947071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-07-15 22:20:46.947078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.731 [2024-07-15 22:20:46.947087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-07-15 22:20:46.947094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.731 [2024-07-15 22:20:46.947103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-07-15 22:20:46.947110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.731 [2024-07-15 22:20:46.947120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-07-15 22:20:46.947133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.731 [2024-07-15 22:20:46.947145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-07-15 22:20:46.947152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.731 [2024-07-15 22:20:46.947161] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-07-15 22:20:46.947168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.731 [2024-07-15 22:20:46.947177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-07-15 22:20:46.947184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.731 [2024-07-15 22:20:46.947193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-07-15 22:20:46.947200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.731 [2024-07-15 22:20:46.947210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-07-15 22:20:46.947217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.731 [2024-07-15 22:20:46.947226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-07-15 22:20:46.947233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.731 [2024-07-15 22:20:46.947242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-07-15 22:20:46.947249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.731 [2024-07-15 22:20:46.947258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-07-15 22:20:46.947266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.731 [2024-07-15 22:20:46.947275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-07-15 22:20:46.947282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.731 [2024-07-15 22:20:46.947292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-07-15 22:20:46.947299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.731 [2024-07-15 22:20:46.947308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-07-15 22:20:46.947315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.731 [2024-07-15 22:20:46.947324] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-07-15 22:20:46.947331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.731 [2024-07-15 22:20:46.947340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-07-15 22:20:46.947349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.731 [2024-07-15 22:20:46.947358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-07-15 22:20:46.947365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.731 [2024-07-15 22:20:46.947375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-07-15 22:20:46.947381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.731 [2024-07-15 22:20:46.947391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-07-15 22:20:46.947398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.731 [2024-07-15 22:20:46.947407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-07-15 22:20:46.947414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.731 [2024-07-15 22:20:46.947423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-07-15 22:20:46.947430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.731 [2024-07-15 22:20:46.947439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-07-15 22:20:46.947446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.731 [2024-07-15 22:20:46.947455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-07-15 22:20:46.947462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.731 [2024-07-15 22:20:46.947471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-07-15 22:20:46.947478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.731 [2024-07-15 22:20:46.947488] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-07-15 22:20:46.947495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.731 [2024-07-15 22:20:46.947504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-07-15 22:20:46.947511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.731 [2024-07-15 22:20:46.947521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-07-15 22:20:46.947527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.731 [2024-07-15 22:20:46.947537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-07-15 22:20:46.947544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.731 [2024-07-15 22:20:46.947554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-07-15 22:20:46.947562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.731 [2024-07-15 22:20:46.947571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-07-15 22:20:46.947578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.731 [2024-07-15 22:20:46.947587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-07-15 22:20:46.947594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.731 [2024-07-15 22:20:46.947603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-07-15 22:20:46.947610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.731 [2024-07-15 22:20:46.947619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-07-15 22:20:46.947626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.731 [2024-07-15 22:20:46.947635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-07-15 22:20:46.947642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.731 [2024-07-15 22:20:46.947652] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-07-15 22:20:46.947659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.731 [2024-07-15 22:20:46.947668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-07-15 22:20:46.947676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.731 [2024-07-15 22:20:46.947685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-07-15 22:20:46.947692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.731 [2024-07-15 22:20:46.947701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-07-15 22:20:46.947708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.731 [2024-07-15 22:20:46.947785] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13c5770 was disconnected and freed. reset controller. 00:23:21.731 [2024-07-15 22:20:46.947879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-07-15 22:20:46.947888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.732 [2024-07-15 22:20:46.947903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-07-15 22:20:46.947910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.732 [2024-07-15 22:20:46.947923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-07-15 22:20:46.947930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.732 [2024-07-15 22:20:46.947939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-07-15 22:20:46.947946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.732 [2024-07-15 22:20:46.947956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-07-15 22:20:46.947963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.732 [2024-07-15 22:20:46.947972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-07-15 22:20:46.947979] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.732 [2024-07-15 22:20:46.947988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-07-15 22:20:46.947995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.732 [2024-07-15 22:20:46.948004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-07-15 22:20:46.948011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.732 [2024-07-15 22:20:46.948021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-07-15 22:20:46.948028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.732 [2024-07-15 22:20:46.948037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-07-15 22:20:46.948044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.732 [2024-07-15 22:20:46.948053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-07-15 22:20:46.948060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.732 [2024-07-15 22:20:46.948069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-07-15 22:20:46.948077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.732 [2024-07-15 22:20:46.948086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-07-15 22:20:46.948093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.732 [2024-07-15 22:20:46.948102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-07-15 22:20:46.948109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.732 [2024-07-15 22:20:46.948118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-07-15 22:20:46.948135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.732 [2024-07-15 22:20:46.948145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-07-15 22:20:46.948152] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.732 [2024-07-15 22:20:46.948162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-07-15 22:20:46.948169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.732 [2024-07-15 22:20:46.948178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-07-15 22:20:46.948185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.732 [2024-07-15 22:20:46.948194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-07-15 22:20:46.948202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.732 [2024-07-15 22:20:46.948211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-07-15 22:20:46.948218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.732 [2024-07-15 22:20:46.948227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-07-15 22:20:46.948234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.732 [2024-07-15 22:20:46.948243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-07-15 22:20:46.948250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.732 [2024-07-15 22:20:46.948259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-07-15 22:20:46.948266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.732 [2024-07-15 22:20:46.948276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-07-15 22:20:46.948283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.732 [2024-07-15 22:20:46.948292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-07-15 22:20:46.948299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.732 [2024-07-15 22:20:46.948308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-07-15 22:20:46.948315] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.732 [2024-07-15 22:20:46.948325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-07-15 22:20:46.948333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.732 [2024-07-15 22:20:46.948343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-07-15 22:20:46.948351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.732 [2024-07-15 22:20:46.948360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-07-15 22:20:46.948367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.732 [2024-07-15 22:20:46.948376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-07-15 22:20:46.948383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.732 [2024-07-15 22:20:46.948393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-07-15 22:20:46.948399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.732 [2024-07-15 22:20:46.948409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-07-15 22:20:46.948416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.732 [2024-07-15 22:20:46.948425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-07-15 22:20:46.948432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.732 [2024-07-15 22:20:46.948441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-07-15 22:20:46.948448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.732 [2024-07-15 22:20:46.948457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-07-15 22:20:46.948464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.733 [2024-07-15 22:20:46.948473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-07-15 22:20:46.948480] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.733 [2024-07-15 22:20:46.948489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-07-15 22:20:46.948496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.733 [2024-07-15 22:20:46.948506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-07-15 22:20:46.948513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.733 [2024-07-15 22:20:46.948522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-07-15 22:20:46.948530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.733 [2024-07-15 22:20:46.948539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-07-15 22:20:46.948547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.733 [2024-07-15 22:20:46.948557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-07-15 22:20:46.948564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.733 [2024-07-15 22:20:46.948573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-07-15 22:20:46.948580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.733 [2024-07-15 22:20:46.948589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-07-15 22:20:46.948596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.733 [2024-07-15 22:20:46.948605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-07-15 22:20:46.948612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.733 [2024-07-15 22:20:46.948621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-07-15 22:20:46.948629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.733 [2024-07-15 22:20:46.948638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-07-15 22:20:46.948645] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.733 [2024-07-15 22:20:46.948654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-07-15 22:20:46.948661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.733 [2024-07-15 22:20:46.948670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-07-15 22:20:46.948677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.733 [2024-07-15 22:20:46.948686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-07-15 22:20:46.948694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.733 [2024-07-15 22:20:46.948703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-07-15 22:20:46.948710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.733 [2024-07-15 22:20:46.948719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-07-15 22:20:46.948726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.733 [2024-07-15 22:20:46.948736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-07-15 22:20:46.948743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.733 [2024-07-15 22:20:46.948753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-07-15 22:20:46.948760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.733 [2024-07-15 22:20:46.948769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-07-15 22:20:46.948777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.733 [2024-07-15 22:20:46.948786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-07-15 22:20:46.948793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.733 [2024-07-15 22:20:46.948802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-07-15 22:20:46.948809] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.733 [2024-07-15 22:20:46.948818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-07-15 22:20:46.948825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.733 [2024-07-15 22:20:46.948834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-07-15 22:20:46.948841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.733 [2024-07-15 22:20:46.948850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-07-15 22:20:46.948857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.733 [2024-07-15 22:20:46.948867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-07-15 22:20:46.948874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.733 [2024-07-15 22:20:46.948883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-07-15 22:20:46.948890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.733 [2024-07-15 22:20:46.948899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-07-15 22:20:46.948906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.733 [2024-07-15 22:20:46.948915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-07-15 22:20:46.948922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.733 [2024-07-15 22:20:46.948931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-07-15 22:20:46.948938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.733 [2024-07-15 22:20:46.948989] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13c6af0 was disconnected and freed. reset controller. 
00:23:21.733 [2024-07-15 22:20:46.949130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13411b0 (9): Bad file descriptor 00:23:21.733 [2024-07-15 22:20:46.949151] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe70340 (9): Bad file descriptor 00:23:21.733 [2024-07-15 22:20:46.949164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x135b030 (9): Bad file descriptor 00:23:21.733 [2024-07-15 22:20:46.949180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x131e5d0 (9): Bad file descriptor 00:23:21.733 [2024-07-15 22:20:46.949196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x135aca0 (9): Bad file descriptor 00:23:21.733 [2024-07-15 22:20:46.949208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14df0c0 (9): Bad file descriptor 00:23:21.733 [2024-07-15 22:20:46.949221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ea210 (9): Bad file descriptor 00:23:21.733 [2024-07-15 22:20:46.949233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f2990 (9): Bad file descriptor 00:23:21.733 [2024-07-15 22:20:46.949245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14bbe90 (9): Bad file descriptor 00:23:21.733 [2024-07-15 22:20:46.949257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14e0290 (9): Bad file descriptor 00:23:21.733 [2024-07-15 22:20:46.949338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-07-15 22:20:46.949347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.733 [2024-07-15 22:20:46.949359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-07-15 22:20:46.949366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.733 [2024-07-15 22:20:46.949375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-07-15 22:20:46.949382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.733 [2024-07-15 22:20:46.949391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-07-15 22:20:46.949398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.733 [2024-07-15 22:20:46.949407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-07-15 22:20:46.949414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.733 [2024-07-15 22:20:46.949424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:21.733 [2024-07-15 22:20:46.949431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.733 [2024-07-15 22:20:46.949440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-07-15 22:20:46.949447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.733 [2024-07-15 22:20:46.949457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-07-15 22:20:46.949468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.734 [2024-07-15 22:20:46.949484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-07-15 22:20:46.949491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.734 [2024-07-15 22:20:46.949500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-07-15 22:20:46.949507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.734 [2024-07-15 22:20:46.949517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-07-15 22:20:46.949524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.734 [2024-07-15 22:20:46.949534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-07-15 22:20:46.949541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.734 [2024-07-15 22:20:46.949551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-07-15 22:20:46.949557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.734 [2024-07-15 22:20:46.949567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-07-15 22:20:46.949574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.734 [2024-07-15 22:20:46.949583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-07-15 22:20:46.949590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.734 [2024-07-15 22:20:46.949600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:21.734 [2024-07-15 22:20:46.949606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.734 [2024-07-15 22:20:46.949615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-07-15 22:20:46.949622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.734 [2024-07-15 22:20:46.949632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-07-15 22:20:46.949639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.734 [2024-07-15 22:20:46.949648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-07-15 22:20:46.949655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.734 [2024-07-15 22:20:46.949664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-07-15 22:20:46.949671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.734 [2024-07-15 22:20:46.949680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-07-15 22:20:46.949689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.734 [2024-07-15 22:20:46.949698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-07-15 22:20:46.949705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.734 [2024-07-15 22:20:46.949714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-07-15 22:20:46.949721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.734 [2024-07-15 22:20:46.949731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-07-15 22:20:46.949737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.734 [2024-07-15 22:20:46.949746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-07-15 22:20:46.949754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.734 [2024-07-15 22:20:46.949763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-07-15 
22:20:46.949769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.734 [2024-07-15 22:20:46.949779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-07-15 22:20:46.949785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.734 [2024-07-15 22:20:46.949794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-07-15 22:20:46.949801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.734 [2024-07-15 22:20:46.949810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-07-15 22:20:46.949817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.734 [2024-07-15 22:20:46.949826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-07-15 22:20:46.949833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.734 [2024-07-15 22:20:46.949843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-07-15 22:20:46.949850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.734 [2024-07-15 22:20:46.949859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-07-15 22:20:46.949866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.734 [2024-07-15 22:20:46.949875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-07-15 22:20:46.949882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.734 [2024-07-15 22:20:46.949896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-07-15 22:20:46.949903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.734 [2024-07-15 22:20:46.949912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-07-15 22:20:46.949919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.734 [2024-07-15 22:20:46.949928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-07-15 22:20:46.949935] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.734 [2024-07-15 22:20:46.949944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-07-15 22:20:46.949951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.734 [2024-07-15 22:20:46.949961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-07-15 22:20:46.949967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.734 [2024-07-15 22:20:46.949977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-07-15 22:20:46.949984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.734 [2024-07-15 22:20:46.949993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-07-15 22:20:46.950000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.734 [2024-07-15 22:20:46.950009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-07-15 22:20:46.950016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.734 [2024-07-15 22:20:46.950025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-07-15 22:20:46.950032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.734 [2024-07-15 22:20:46.950041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-07-15 22:20:46.950048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.734 [2024-07-15 22:20:46.950057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-07-15 22:20:46.950064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.734 [2024-07-15 22:20:46.950073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-07-15 22:20:46.950080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.734 [2024-07-15 22:20:46.950089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-07-15 22:20:46.950099] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.734 [2024-07-15 22:20:46.950108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-07-15 22:20:46.950115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.734 [2024-07-15 22:20:46.950129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-07-15 22:20:46.950137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.734 [2024-07-15 22:20:46.950146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.735 [2024-07-15 22:20:46.950153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.735 [2024-07-15 22:20:46.950162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.735 [2024-07-15 22:20:46.950169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.735 [2024-07-15 22:20:46.950178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.735 [2024-07-15 22:20:46.950185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.735 [2024-07-15 22:20:46.950194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.735 [2024-07-15 22:20:46.950201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.735 [2024-07-15 22:20:46.950211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.735 [2024-07-15 22:20:46.950218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.735 [2024-07-15 22:20:46.950226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.735 [2024-07-15 22:20:46.950233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.735 [2024-07-15 22:20:46.950243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.735 [2024-07-15 22:20:46.954115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.735 [2024-07-15 22:20:46.954163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.735 [2024-07-15 22:20:46.954172] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.735 [2024-07-15 22:20:46.954182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.735 [2024-07-15 22:20:46.954190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.735 [2024-07-15 22:20:46.954200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.735 [2024-07-15 22:20:46.954207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.735 [2024-07-15 22:20:46.954216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.735 [2024-07-15 22:20:46.954228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.735 [2024-07-15 22:20:46.954237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.735 [2024-07-15 22:20:46.954245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.735 [2024-07-15 22:20:46.954254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.735 [2024-07-15 22:20:46.954261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.735 [2024-07-15 22:20:46.954270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.735 [2024-07-15 22:20:46.954277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.735 [2024-07-15 22:20:46.954286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.735 [2024-07-15 22:20:46.954293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.735 [2024-07-15 22:20:46.954302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.735 [2024-07-15 22:20:46.954309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.735 [2024-07-15 22:20:46.954373] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13c24b0 was disconnected and freed. reset controller. 00:23:21.735 [2024-07-15 22:20:46.961698] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:21.735 [2024-07-15 22:20:46.961726] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:23:21.735 [2024-07-15 22:20:46.961745] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:21.735 [2024-07-15 22:20:46.961757] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:21.735 [2024-07-15 22:20:46.961774] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:21.735 [2024-07-15 22:20:46.961786] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:21.735 [2024-07-15 22:20:46.961803] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:21.735 [2024-07-15 22:20:46.963209] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:23:21.735 [2024-07-15 22:20:46.963234] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:23:21.735 [2024-07-15 22:20:46.963247] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:21.735 [2024-07-15 22:20:46.964091] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:21.735 [2024-07-15 22:20:46.964140] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:21.735 [2024-07-15 22:20:46.964939] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:21.735 [2024-07-15 22:20:46.965380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.735 [2024-07-15 22:20:46.965419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ea210 with addr=10.0.0.2, port=4420 00:23:21.735 [2024-07-15 22:20:46.965432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ea210 is same with the state(5) to be set 00:23:21.735 [2024-07-15 22:20:46.965875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.735 [2024-07-15 22:20:46.965886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f2990 with addr=10.0.0.2, port=4420 00:23:21.735 [2024-07-15 22:20:46.965893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f2990 is same with the state(5) to be set 00:23:21.735 [2024-07-15 22:20:46.966415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.735 [2024-07-15 22:20:46.966453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x135aca0 with addr=10.0.0.2, port=4420 00:23:21.735 [2024-07-15 22:20:46.966464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135aca0 is same with the state(5) to be set 00:23:21.735 [2024-07-15 22:20:46.966803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.735 [2024-07-15 22:20:46.966816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.735 [2024-07-15 22:20:46.966832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.735 [2024-07-15 22:20:46.966840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.735 [2024-07-15 22:20:46.966849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.735 [2024-07-15 22:20:46.966856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.735 [2024-07-15 22:20:46.966865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.735 [2024-07-15 22:20:46.966873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.735 [2024-07-15 22:20:46.966882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.735 [2024-07-15 22:20:46.966888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.735 [2024-07-15 22:20:46.966897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.735 [2024-07-15 22:20:46.966904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.735 [2024-07-15 22:20:46.966913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.735 [2024-07-15 22:20:46.966920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.735 [2024-07-15 22:20:46.966930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.735 [2024-07-15 22:20:46.966937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.735 [2024-07-15 22:20:46.966946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.735 [2024-07-15 22:20:46.966953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.735 [2024-07-15 22:20:46.966962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.735 [2024-07-15 22:20:46.966969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.735 [2024-07-15 22:20:46.966983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.735 [2024-07-15 22:20:46.966990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.735 [2024-07-15 22:20:46.967000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.735 [2024-07-15 22:20:46.967007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:21.735 [2024-07-15 22:20:46.967017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.735 [2024-07-15 22:20:46.967024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.735 [2024-07-15 22:20:46.967033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.735 [2024-07-15 22:20:46.967040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.735 [2024-07-15 22:20:46.967050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.735 [2024-07-15 22:20:46.967057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.735 [2024-07-15 22:20:46.967066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.735 [2024-07-15 22:20:46.967073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.735 [2024-07-15 22:20:46.967082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.736 [2024-07-15 22:20:46.967090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.736 [2024-07-15 22:20:46.967099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.736 [2024-07-15 22:20:46.967106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.736 [2024-07-15 22:20:46.967115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.736 [2024-07-15 22:20:46.967130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.736 [2024-07-15 22:20:46.967140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.736 [2024-07-15 22:20:46.967147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.736 [2024-07-15 22:20:46.967156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.736 [2024-07-15 22:20:46.967163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.736 [2024-07-15 22:20:46.967172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.736 [2024-07-15 22:20:46.967180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.736 [2024-07-15 
22:20:46.967189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.736 [2024-07-15 22:20:46.967198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.736 [2024-07-15 22:20:46.967207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.736 [2024-07-15 22:20:46.967214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.736 [2024-07-15 22:20:46.967224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.736 [2024-07-15 22:20:46.967231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.736 [2024-07-15 22:20:46.967240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.736 [2024-07-15 22:20:46.967247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.736 [2024-07-15 22:20:46.967257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.736 [2024-07-15 22:20:46.967264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.736 [2024-07-15 22:20:46.967273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.736 [2024-07-15 22:20:46.967280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.736 [2024-07-15 22:20:46.967289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.736 [2024-07-15 22:20:46.967296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.736 [2024-07-15 22:20:46.967306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.736 [2024-07-15 22:20:46.967313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.736 [2024-07-15 22:20:46.967322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.736 [2024-07-15 22:20:46.967329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.736 [2024-07-15 22:20:46.967339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.736 [2024-07-15 22:20:46.967346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.736 [2024-07-15 22:20:46.967355] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.736 [2024-07-15 22:20:46.967362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.736 [2024-07-15 22:20:46.967371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.736 [2024-07-15 22:20:46.967379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.736 [2024-07-15 22:20:46.967388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.736 [2024-07-15 22:20:46.967395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.736 [2024-07-15 22:20:46.967406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.736 [2024-07-15 22:20:46.967413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.736 [2024-07-15 22:20:46.967422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.736 [2024-07-15 22:20:46.967429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.736 [2024-07-15 22:20:46.967438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.736 [2024-07-15 22:20:46.967445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.736 [2024-07-15 22:20:46.967455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.736 [2024-07-15 22:20:46.967462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.736 [2024-07-15 22:20:46.967471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.736 [2024-07-15 22:20:46.967478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.736 [2024-07-15 22:20:46.967487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.736 [2024-07-15 22:20:46.967494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.736 [2024-07-15 22:20:46.967504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.736 [2024-07-15 22:20:46.967511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.736 [2024-07-15 22:20:46.967520] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.736 [2024-07-15 22:20:46.967527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.736 [2024-07-15 22:20:46.967536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.736 [2024-07-15 22:20:46.967544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.736 [2024-07-15 22:20:46.967553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.736 [2024-07-15 22:20:46.967560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.736 [2024-07-15 22:20:46.967569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.736 [2024-07-15 22:20:46.967576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.736 [2024-07-15 22:20:46.967585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.736 [2024-07-15 22:20:46.967592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.736 [2024-07-15 22:20:46.967601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.736 [2024-07-15 22:20:46.967611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.736 [2024-07-15 22:20:46.967620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.736 [2024-07-15 22:20:46.967628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.736 [2024-07-15 22:20:46.967638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.736 [2024-07-15 22:20:46.967645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.736 [2024-07-15 22:20:46.967654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.736 [2024-07-15 22:20:46.967661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.736 [2024-07-15 22:20:46.967670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.736 [2024-07-15 22:20:46.967677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.737 [2024-07-15 22:20:46.967687] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.737 [2024-07-15 22:20:46.967694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.737 [2024-07-15 22:20:46.967703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.737 [2024-07-15 22:20:46.967710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.737 [2024-07-15 22:20:46.967719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.737 [2024-07-15 22:20:46.967726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.737 [2024-07-15 22:20:46.967736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.737 [2024-07-15 22:20:46.967743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.737 [2024-07-15 22:20:46.967752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.737 [2024-07-15 22:20:46.967759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.737 [2024-07-15 22:20:46.967768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.737 [2024-07-15 22:20:46.967775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.737 [2024-07-15 22:20:46.967784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.737 [2024-07-15 22:20:46.967791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.737 [2024-07-15 22:20:46.967800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.737 [2024-07-15 22:20:46.967808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.737 [2024-07-15 22:20:46.967819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.737 [2024-07-15 22:20:46.967826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.737 [2024-07-15 22:20:46.967836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.737 [2024-07-15 22:20:46.967843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.737 [2024-07-15 22:20:46.967852] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.737 [2024-07-15 22:20:46.967859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.737 [2024-07-15 22:20:46.967868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.737 [2024-07-15 22:20:46.967875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.737 [2024-07-15 22:20:46.967883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13199c0 is same with the state(5) to be set 00:23:21.737 [2024-07-15 22:20:46.969179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.737 [2024-07-15 22:20:46.969194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.737 [2024-07-15 22:20:46.969207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.737 [2024-07-15 22:20:46.969215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.737 [2024-07-15 22:20:46.969227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.737 [2024-07-15 22:20:46.969235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.737 [2024-07-15 22:20:46.969246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.737 [2024-07-15 22:20:46.969255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.737 [2024-07-15 22:20:46.969265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.737 [2024-07-15 22:20:46.969274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.737 [2024-07-15 22:20:46.969284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.737 [2024-07-15 22:20:46.969293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.737 [2024-07-15 22:20:46.969303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.737 [2024-07-15 22:20:46.969312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.737 [2024-07-15 22:20:46.969323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.737 [2024-07-15 22:20:46.969332] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.737 [2024-07-15 22:20:46.969345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.737 [2024-07-15 22:20:46.969354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.737 [2024-07-15 22:20:46.969365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.737 [2024-07-15 22:20:46.969373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.737 [2024-07-15 22:20:46.969384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.737 [2024-07-15 22:20:46.969393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.737 [2024-07-15 22:20:46.969404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.737 [2024-07-15 22:20:46.969413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.737 [2024-07-15 22:20:46.969422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.737 [2024-07-15 22:20:46.969429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.737 [2024-07-15 22:20:46.969438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.737 [2024-07-15 22:20:46.969445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.737 [2024-07-15 22:20:46.969454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.737 [2024-07-15 22:20:46.969461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.737 [2024-07-15 22:20:46.969470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.737 [2024-07-15 22:20:46.969477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.737 [2024-07-15 22:20:46.969486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.737 [2024-07-15 22:20:46.969494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.737 [2024-07-15 22:20:46.969502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.737 [2024-07-15 22:20:46.969510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.737 [2024-07-15 22:20:46.969519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.737 [2024-07-15 22:20:46.969526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.737 [2024-07-15 22:20:46.969535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.737 [2024-07-15 22:20:46.969542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.737 [2024-07-15 22:20:46.969551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.737 [2024-07-15 22:20:46.969560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.737 [2024-07-15 22:20:46.969570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.737 [2024-07-15 22:20:46.969577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.737 [2024-07-15 22:20:46.969586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.737 [2024-07-15 22:20:46.969593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.737 [2024-07-15 22:20:46.969602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.737 [2024-07-15 22:20:46.969609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.737 [2024-07-15 22:20:46.969618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.737 [2024-07-15 22:20:46.969625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.737 [2024-07-15 22:20:46.969634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.738 [2024-07-15 22:20:46.969641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.738 [2024-07-15 22:20:46.969650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.738 [2024-07-15 22:20:46.969658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.738 [2024-07-15 22:20:46.969667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.738 [2024-07-15 22:20:46.969674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.738 [2024-07-15 22:20:46.969683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.738 [2024-07-15 22:20:46.969690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.738 [2024-07-15 22:20:46.969699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.738 [2024-07-15 22:20:46.969706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.738 [2024-07-15 22:20:46.969715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.738 [2024-07-15 22:20:46.969722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.738 [2024-07-15 22:20:46.969731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.738 [2024-07-15 22:20:46.969739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.738 [2024-07-15 22:20:46.969748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.738 [2024-07-15 22:20:46.969756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.738 [2024-07-15 22:20:46.969765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.738 [2024-07-15 22:20:46.969774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.738 [2024-07-15 22:20:46.969783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.738 [2024-07-15 22:20:46.969790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.738 [2024-07-15 22:20:46.969799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.738 [2024-07-15 22:20:46.969806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.738 [2024-07-15 22:20:46.969815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.738 [2024-07-15 22:20:46.969822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.738 [2024-07-15 22:20:46.969832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.738 [2024-07-15 22:20:46.969839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:21.738 [2024-07-15 22:20:46.969848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.738 [2024-07-15 22:20:46.969855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.738 [2024-07-15 22:20:46.969864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.738 [2024-07-15 22:20:46.969871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.738 [2024-07-15 22:20:46.969880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.738 [2024-07-15 22:20:46.969887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.738 [2024-07-15 22:20:46.969896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.738 [2024-07-15 22:20:46.969903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.738 [2024-07-15 22:20:46.969912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.738 [2024-07-15 22:20:46.969919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.738 [2024-07-15 22:20:46.969928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.738 [2024-07-15 22:20:46.969935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.738 [2024-07-15 22:20:46.969945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.738 [2024-07-15 22:20:46.969951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.738 [2024-07-15 22:20:46.969961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.738 [2024-07-15 22:20:46.969968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.738 [2024-07-15 22:20:46.969980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.738 [2024-07-15 22:20:46.969987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.738 [2024-07-15 22:20:46.969996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.738 [2024-07-15 22:20:46.970003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:21.738 [2024-07-15 22:20:46.970013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.738 [2024-07-15 22:20:46.970020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.738 [2024-07-15 22:20:46.970029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.738 [2024-07-15 22:20:46.970036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.738 [2024-07-15 22:20:46.970045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.738 [2024-07-15 22:20:46.970053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.738 [2024-07-15 22:20:46.970061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.738 [2024-07-15 22:20:46.970069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.738 [2024-07-15 22:20:46.970078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.738 [2024-07-15 22:20:46.970085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.738 [2024-07-15 22:20:46.970094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.738 [2024-07-15 22:20:46.970101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.738 [2024-07-15 22:20:46.970111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.738 [2024-07-15 22:20:46.970118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.738 [2024-07-15 22:20:46.970131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.738 [2024-07-15 22:20:46.970138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.738 [2024-07-15 22:20:46.970147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.738 [2024-07-15 22:20:46.970154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.738 [2024-07-15 22:20:46.970163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.738 [2024-07-15 22:20:46.970170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.738 [2024-07-15 
22:20:46.970180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.738 [2024-07-15 22:20:46.970188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.738 [2024-07-15 22:20:46.970198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.738 [2024-07-15 22:20:46.970205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.738 [2024-07-15 22:20:46.970214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.738 [2024-07-15 22:20:46.970221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.738 [2024-07-15 22:20:46.970230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.738 [2024-07-15 22:20:46.970237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.738 [2024-07-15 22:20:46.970246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.738 [2024-07-15 22:20:46.970253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.738 [2024-07-15 22:20:46.970262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.738 [2024-07-15 22:20:46.970269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.738 [2024-07-15 22:20:46.970277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2e70 is same with the state(5) to be set 00:23:21.738 [2024-07-15 22:20:46.971848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.738 [2024-07-15 22:20:46.971866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.738 [2024-07-15 22:20:46.971878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.738 [2024-07-15 22:20:46.971886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.738 [2024-07-15 22:20:46.971895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.739 [2024-07-15 22:20:46.971903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.739 [2024-07-15 22:20:46.971912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.739 [2024-07-15 22:20:46.971919] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.739 [2024-07-15 22:20:46.971928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.739 [2024-07-15 22:20:46.971936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.739 [2024-07-15 22:20:46.971945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.739 [2024-07-15 22:20:46.971952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.739 [2024-07-15 22:20:46.971961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.739 [2024-07-15 22:20:46.971972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.739 [2024-07-15 22:20:46.971981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.739 [2024-07-15 22:20:46.971988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.739 [2024-07-15 22:20:46.971997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.739 [2024-07-15 22:20:46.972004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.739 [2024-07-15 22:20:46.972014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.739 [2024-07-15 22:20:46.972021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.739 [2024-07-15 22:20:46.972029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.739 [2024-07-15 22:20:46.972037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.739 [2024-07-15 22:20:46.972045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.739 [2024-07-15 22:20:46.972053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.739 [2024-07-15 22:20:46.972061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.739 [2024-07-15 22:20:46.972069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.739 [2024-07-15 22:20:46.972078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.739 [2024-07-15 22:20:46.972085] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.739 [2024-07-15 22:20:46.972094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.739 [2024-07-15 22:20:46.972101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.739 [2024-07-15 22:20:46.972110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.739 [2024-07-15 22:20:46.972117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.739 [2024-07-15 22:20:46.972133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.739 [2024-07-15 22:20:46.972141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.739 [2024-07-15 22:20:46.972150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.739 [2024-07-15 22:20:46.972157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.739 [2024-07-15 22:20:46.972166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.739 [2024-07-15 22:20:46.972173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.739 [2024-07-15 22:20:46.972184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.739 [2024-07-15 22:20:46.972191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.739 [2024-07-15 22:20:46.972201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.739 [2024-07-15 22:20:46.972208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.739 [2024-07-15 22:20:46.972217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.739 [2024-07-15 22:20:46.972224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.739 [2024-07-15 22:20:46.972233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.739 [2024-07-15 22:20:46.972240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.739 [2024-07-15 22:20:46.972250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.739 [2024-07-15 22:20:46.972257] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.739 [2024-07-15 22:20:46.972266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.739 [2024-07-15 22:20:46.972273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.739 [2024-07-15 22:20:46.972282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.739 [2024-07-15 22:20:46.972290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.739 [2024-07-15 22:20:46.972299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.739 [2024-07-15 22:20:46.972306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.739 [2024-07-15 22:20:46.972315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.739 [2024-07-15 22:20:46.972322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.739 [2024-07-15 22:20:46.972331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.739 [2024-07-15 22:20:46.972338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.739 [2024-07-15 22:20:46.972348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.739 [2024-07-15 22:20:46.972354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.739 [2024-07-15 22:20:46.972364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.739 [2024-07-15 22:20:46.972371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.739 [2024-07-15 22:20:46.972380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.739 [2024-07-15 22:20:46.972388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.739 [2024-07-15 22:20:46.972398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.739 [2024-07-15 22:20:46.972405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.739 [2024-07-15 22:20:46.972414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.739 [2024-07-15 22:20:46.972421] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.739 [2024-07-15 22:20:46.972430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.739 [2024-07-15 22:20:46.972437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.739 [2024-07-15 22:20:46.972447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.739 [2024-07-15 22:20:46.972454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.739 [2024-07-15 22:20:46.972463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.739 [2024-07-15 22:20:46.972470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.739 [2024-07-15 22:20:46.972479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.739 [2024-07-15 22:20:46.972487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.739 [2024-07-15 22:20:46.972496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.739 [2024-07-15 22:20:46.972503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.739 [2024-07-15 22:20:46.972513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.739 [2024-07-15 22:20:46.972520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.739 [2024-07-15 22:20:46.972530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.739 [2024-07-15 22:20:46.972537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.739 [2024-07-15 22:20:46.972546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.739 [2024-07-15 22:20:46.972553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.739 [2024-07-15 22:20:46.972562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.739 [2024-07-15 22:20:46.972569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.739 [2024-07-15 22:20:46.972578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.739 [2024-07-15 22:20:46.972585] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.739 [2024-07-15 22:20:46.972596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.740 [2024-07-15 22:20:46.972603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.740 [2024-07-15 22:20:46.972613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.740 [2024-07-15 22:20:46.972619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.740 [2024-07-15 22:20:46.972629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.740 [2024-07-15 22:20:46.972636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.740 [2024-07-15 22:20:46.972645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.740 [2024-07-15 22:20:46.972652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.740 [2024-07-15 22:20:46.972661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.740 [2024-07-15 22:20:46.972668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.740 [2024-07-15 22:20:46.972677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.740 [2024-07-15 22:20:46.972684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.740 [2024-07-15 22:20:46.972693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.740 [2024-07-15 22:20:46.972700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.740 [2024-07-15 22:20:46.972709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.740 [2024-07-15 22:20:46.972716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.740 [2024-07-15 22:20:46.972726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.740 [2024-07-15 22:20:46.972732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.740 [2024-07-15 22:20:46.972742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.740 [2024-07-15 22:20:46.972749] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.740 [2024-07-15 22:20:46.972758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.740 [2024-07-15 22:20:46.972765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.740 [2024-07-15 22:20:46.972774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.740 [2024-07-15 22:20:46.972781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.740 [2024-07-15 22:20:46.972791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.740 [2024-07-15 22:20:46.972800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.740 [2024-07-15 22:20:46.972809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.740 [2024-07-15 22:20:46.972816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.740 [2024-07-15 22:20:46.972825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.740 [2024-07-15 22:20:46.972832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.740 [2024-07-15 22:20:46.972842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.740 [2024-07-15 22:20:46.972849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.740 [2024-07-15 22:20:46.972858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.740 [2024-07-15 22:20:46.972865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.740 [2024-07-15 22:20:46.972874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.740 [2024-07-15 22:20:46.972881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.740 [2024-07-15 22:20:46.972891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.740 [2024-07-15 22:20:46.972898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.740 [2024-07-15 22:20:46.972907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.740 [2024-07-15 22:20:46.972914] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.740 [2024-07-15 22:20:46.972922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7f60 is same with the state(5) to be set 00:23:21.740 [2024-07-15 22:20:46.974411] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:23:21.740 [2024-07-15 22:20:46.974433] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:23:21.740 [2024-07-15 22:20:46.974443] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:23:21.740 [2024-07-15 22:20:46.974452] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:21.740 [2024-07-15 22:20:46.974462] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:23:21.740 [2024-07-15 22:20:46.974472] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:23:21.740 [2024-07-15 22:20:46.974528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ea210 (9): Bad file descriptor 00:23:21.740 [2024-07-15 22:20:46.974540] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f2990 (9): Bad file descriptor 00:23:21.740 [2024-07-15 22:20:46.974549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x135aca0 (9): Bad file descriptor 00:23:21.740 [2024-07-15 22:20:46.974583] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:21.740 [2024-07-15 22:20:46.974600] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:21.740 [2024-07-15 22:20:46.974614] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:21.740 [2024-07-15 22:20:46.974625] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
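The block above is the driver draining its I/O queue pairs while the controllers are being reset: for every command still outstanding when a submission queue is torn down, nvme_io_qpair_print_command logs the command and spdk_nvme_print_completion logs its status, ABORTED - SQ DELETION (00/08), i.e. NVMe generic status 0x08 (Command Aborted due to SQ Deletion). For triage it is usually enough to count these repeated messages rather than read them; a minimal sketch, assuming the console output has been saved to a hypothetical file named console.log:

$ grep -c 'ABORTED - SQ DELETION (00/08)' console.log   # I/Os aborted by queue teardown
$ grep -c 'resetting controller' console.log            # controller reset attempts
$ grep -c 'Failed to flush tqpair' console.log          # TCP qpairs already torn down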
00:23:21.740 task offset: 25344 on job bdev=Nvme2n1 fails
00:23:21.740
00:23:21.740 Latency(us)
00:23:21.740 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:21.740 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.740 Job: Nvme1n1 ended in about 0.95 seconds with error
00:23:21.740 Verification LBA range: start 0x0 length 0x400
00:23:21.740 Nvme1n1 : 0.95 135.02 8.44 67.51 0.00 312464.50 23920.64 286610.77
00:23:21.740 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.740 Job: Nvme2n1 ended in about 0.94 seconds with error
00:23:21.740 Verification LBA range: start 0x0 length 0x400
00:23:21.740 Nvme2n1 : 0.94 204.13 12.76 68.04 0.00 227555.84 20753.07 246415.36
00:23:21.740 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.740 Job: Nvme3n1 ended in about 0.94 seconds with error
00:23:21.740 Verification LBA range: start 0x0 length 0x400
00:23:21.740 Nvme3n1 : 0.94 203.86 12.74 67.95 0.00 223025.71 21845.33 248162.99
00:23:21.740 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.740 Job: Nvme4n1 ended in about 0.94 seconds with error
00:23:21.740 Verification LBA range: start 0x0 length 0x400
00:23:21.740 Nvme4n1 : 0.94 203.60 12.73 67.87 0.00 218426.67 22063.79 248162.99
00:23:21.740 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.740 Job: Nvme5n1 ended in about 0.95 seconds with error
00:23:21.740 Verification LBA range: start 0x0 length 0x400
00:23:21.740 Nvme5n1 : 0.95 134.16 8.38 67.08 0.00 288590.51 23592.96 276125.01
00:23:21.740 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.740 Job: Nvme6n1 ended in about 0.96 seconds with error
00:23:21.740 Verification LBA range: start 0x0 length 0x400
00:23:21.740 Nvme6n1 : 0.96 138.01 8.63 66.91 0.00 277243.47 22609.92 251658.24
00:23:21.740 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.740 Job: Nvme7n1 ended in about 0.94 seconds with error
00:23:21.740 Verification LBA range: start 0x0 length 0x400
00:23:21.740 Nvme7n1 : 0.94 203.33 12.71 67.78 0.00 204303.36 23265.28 248162.99
00:23:21.740 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.740 Job: Nvme8n1 ended in about 0.95 seconds with error
00:23:21.740 Verification LBA range: start 0x0 length 0x400
00:23:21.740 Nvme8n1 : 0.95 135.38 8.46 67.69 0.00 266398.44 23374.51 260396.37
00:23:21.740 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.740 Job: Nvme9n1 ended in about 0.95 seconds with error
00:23:21.740 Verification LBA range: start 0x0 length 0x400
00:23:21.740 Nvme9n1 : 0.95 202.83 12.68 67.61 0.00 195170.13 23265.28 249910.61
00:23:21.740 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.740 Job: Nvme10n1 ended in about 0.96 seconds with error
00:23:21.740 Verification LBA range: start 0x0 length 0x400
00:23:21.740 Nvme10n1 : 0.96 133.46 8.34 66.73 0.00 258052.55 22063.79 255153.49
00:23:21.740 ===================================================================================================================
00:23:21.740 Total : 1693.77 105.86 675.17 0.00 242410.07 20753.07 286610.77
00:23:21.740 [2024-07-15 22:20:46.999583] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:23:21.740 [2024-07-15 22:20:46.999621] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting
controller 00:23:21.740 [2024-07-15 22:20:47.000048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.740 [2024-07-15 22:20:47.000071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe70340 with addr=10.0.0.2, port=4420 00:23:21.740 [2024-07-15 22:20:47.000080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe70340 is same with the state(5) to be set 00:23:21.740 [2024-07-15 22:20:47.000526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.740 [2024-07-15 22:20:47.000536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e0290 with addr=10.0.0.2, port=4420 00:23:21.741 [2024-07-15 22:20:47.000544] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14e0290 is same with the state(5) to be set 00:23:21.741 [2024-07-15 22:20:47.000973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.741 [2024-07-15 22:20:47.000982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13411b0 with addr=10.0.0.2, port=4420 00:23:21.741 [2024-07-15 22:20:47.000990] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13411b0 is same with the state(5) to be set 00:23:21.741 [2024-07-15 22:20:47.001385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.741 [2024-07-15 22:20:47.001395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131e5d0 with addr=10.0.0.2, port=4420 00:23:21.741 [2024-07-15 22:20:47.001402] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e5d0 is same with the state(5) to be set 00:23:21.741 [2024-07-15 22:20:47.001626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.741 [2024-07-15 22:20:47.001639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x135b030 with addr=10.0.0.2, port=4420 00:23:21.741 [2024-07-15 22:20:47.001646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135b030 is same with the state(5) to be set 00:23:21.741 [2024-07-15 22:20:47.001945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.741 [2024-07-15 22:20:47.001956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14bbe90 with addr=10.0.0.2, port=4420 00:23:21.741 [2024-07-15 22:20:47.001963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bbe90 is same with the state(5) to be set 00:23:21.741 [2024-07-15 22:20:47.001971] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:21.741 [2024-07-15 22:20:47.001978] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:23:21.741 [2024-07-15 22:20:47.001986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
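Every posix.c connect() failure above reports errno = 111, which on Linux is ECONNREFUSED: nothing is listening on 10.0.0.2:4420 anymore (the target side has presumably already been shut down at this point in the test), so each reconnect attempt is refused and the corresponding controller is left in the failed state. The mapping can be double-checked from the shell, assuming python3 is available on the host:

$ python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'   # ECONNREFUSED Connection refused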
00:23:21.741 [2024-07-15 22:20:47.002003] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:23:21.741 [2024-07-15 22:20:47.002009] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:23:21.741 [2024-07-15 22:20:47.002016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:23:21.741 [2024-07-15 22:20:47.002027] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:23:21.741 [2024-07-15 22:20:47.002033] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:23:21.741 [2024-07-15 22:20:47.002039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:23:21.741 [2024-07-15 22:20:47.002883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.741 [2024-07-15 22:20:47.002894] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.741 [2024-07-15 22:20:47.002900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.741 [2024-07-15 22:20:47.003344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.741 [2024-07-15 22:20:47.003356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14df0c0 with addr=10.0.0.2, port=4420 00:23:21.741 [2024-07-15 22:20:47.003366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14df0c0 is same with the state(5) to be set 00:23:21.741 [2024-07-15 22:20:47.003379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe70340 (9): Bad file descriptor 00:23:21.741 [2024-07-15 22:20:47.003390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14e0290 (9): Bad file descriptor 00:23:21.741 [2024-07-15 22:20:47.003398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13411b0 (9): Bad file descriptor 00:23:21.741 [2024-07-15 22:20:47.003407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x131e5d0 (9): Bad file descriptor 00:23:21.741 [2024-07-15 22:20:47.003416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x135b030 (9): Bad file descriptor 00:23:21.741 [2024-07-15 22:20:47.003424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14bbe90 (9): Bad file descriptor 00:23:21.741 [2024-07-15 22:20:47.003467] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:21.741 [2024-07-15 22:20:47.003479] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:21.741 [2024-07-15 22:20:47.003489] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:21.741 [2024-07-15 22:20:47.003499] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:21.741 [2024-07-15 22:20:47.003509] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:23:21.741 [2024-07-15 22:20:47.003519] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:21.741 [2024-07-15 22:20:47.003584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14df0c0 (9): Bad file descriptor 00:23:21.741 [2024-07-15 22:20:47.003594] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:23:21.741 [2024-07-15 22:20:47.003600] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:23:21.741 [2024-07-15 22:20:47.003607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:23:21.741 [2024-07-15 22:20:47.003616] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:23:21.741 [2024-07-15 22:20:47.003623] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:23:21.741 [2024-07-15 22:20:47.003629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:23:21.741 [2024-07-15 22:20:47.003639] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:23:21.741 [2024-07-15 22:20:47.003645] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:23:21.741 [2024-07-15 22:20:47.003651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:23:21.741 [2024-07-15 22:20:47.003660] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:21.741 [2024-07-15 22:20:47.003667] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:21.741 [2024-07-15 22:20:47.003674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:21.741 [2024-07-15 22:20:47.003683] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:23:21.741 [2024-07-15 22:20:47.003689] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:23:21.741 [2024-07-15 22:20:47.003696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:23:21.741 [2024-07-15 22:20:47.003707] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:23:21.741 [2024-07-15 22:20:47.003713] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:23:21.741 [2024-07-15 22:20:47.003720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:23:21.741 [2024-07-15 22:20:47.003769] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:21.741 [2024-07-15 22:20:47.003780] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:23:21.741 [2024-07-15 22:20:47.003789] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:23:21.741 [2024-07-15 22:20:47.003797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.741 [2024-07-15 22:20:47.003803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.741 [2024-07-15 22:20:47.003809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.741 [2024-07-15 22:20:47.003814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.741 [2024-07-15 22:20:47.003837] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:23:21.741 [2024-07-15 22:20:47.003844] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:23:21.741 [2024-07-15 22:20:47.003850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:23:21.741 [2024-07-15 22:20:47.003870] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.741 [2024-07-15 22:20:47.003876] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.741 [2024-07-15 22:20:47.003890] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:21.741 [2024-07-15 22:20:47.004208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.741 [2024-07-15 22:20:47.004219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x135aca0 with addr=10.0.0.2, port=4420 00:23:21.741 [2024-07-15 22:20:47.004226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135aca0 is same with the state(5) to be set 00:23:21.741 [2024-07-15 22:20:47.004621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.741 [2024-07-15 22:20:47.004632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f2990 with addr=10.0.0.2, port=4420 00:23:21.741 [2024-07-15 22:20:47.004639] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f2990 is same with the state(5) to be set 00:23:21.741 [2024-07-15 22:20:47.005046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.741 [2024-07-15 22:20:47.005055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ea210 with addr=10.0.0.2, port=4420 00:23:21.741 [2024-07-15 22:20:47.005062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ea210 is same with the state(5) to be set 00:23:21.741 [2024-07-15 22:20:47.005090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x135aca0 (9): Bad file descriptor 00:23:21.742 [2024-07-15 22:20:47.005100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f2990 (9): Bad file descriptor 00:23:21.742 [2024-07-15 22:20:47.005109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ea210 (9): Bad file descriptor 00:23:21.742 [2024-07-15 22:20:47.005138] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:23:21.742 [2024-07-15 22:20:47.005145] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:23:21.742 [2024-07-15 22:20:47.005152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:23:21.742 [2024-07-15 22:20:47.005164] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:23:21.742 [2024-07-15 22:20:47.005171] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:23:21.742 [2024-07-15 22:20:47.005177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:23:21.742 [2024-07-15 22:20:47.005186] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:21.742 [2024-07-15 22:20:47.005192] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:23:21.742 [2024-07-15 22:20:47.005198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:23:21.742 [2024-07-15 22:20:47.005226] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.742 [2024-07-15 22:20:47.005232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
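The remainder of the tc3 trace below is teardown: the stoptarget and nvmftestfini helpers remove the per-test artifacts, unload the initiator-side kernel modules, and dismantle the target network namespace. Condensed into plain commands, with $SPDK_DIR standing in for the long workspace path (an abbreviation for readability, not a variable the scripts define) and the namespace removal inferred from _remove_spdk_ns, it amounts roughly to:

# Condensed sketch of the cleanup performed below by stoptarget + nvmftestfini:
rm -f ./local-job0-0-verify.state                          # bdevperf job state file
rm -rf "$SPDK_DIR/test/nvmf/target/bdevperf.conf" \
       "$SPDK_DIR/test/nvmf/target/rpcs.txt"               # per-test config artifacts
sync
modprobe -v -r nvme-tcp                                    # cascades into the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines seen in the log
modprobe -v -r nvme-fabrics
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true        # assumed equivalent of _remove_spdk_ns for this run
ip -4 addr flush cvl_0_1                                   # clear the initiator-side address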
00:23:21.742 [2024-07-15 22:20:47.005238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:22.019 22:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:23:22.020 22:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:23:22.974 22:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2849869 00:23:22.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2849869) - No such process 00:23:22.974 22:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:23:22.974 22:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:23:22.974 22:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:22.974 22:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:22.974 22:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:22.974 22:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:22.974 22:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:22.974 22:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:23:22.974 22:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:22.974 22:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:23:22.974 22:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:22.974 22:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:22.974 rmmod nvme_tcp 00:23:22.974 rmmod nvme_fabrics 00:23:22.974 rmmod nvme_keyring 00:23:22.974 22:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:22.974 22:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:23:22.974 22:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:23:22.974 22:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:22.974 22:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:22.974 22:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:22.974 22:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:22.974 22:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:22.974 22:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:22.974 22:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.974 22:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:22.974 22:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.519 22:20:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 
addr flush cvl_0_1 00:23:25.519 00:23:25.519 real 0m7.585s 00:23:25.519 user 0m17.796s 00:23:25.519 sys 0m1.253s 00:23:25.519 22:20:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:25.519 22:20:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:25.519 ************************************ 00:23:25.519 END TEST nvmf_shutdown_tc3 00:23:25.519 ************************************ 00:23:25.519 22:20:50 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:25.519 22:20:50 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:23:25.519 00:23:25.519 real 0m32.165s 00:23:25.519 user 1m14.916s 00:23:25.519 sys 0m9.246s 00:23:25.519 22:20:50 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:25.519 22:20:50 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:25.519 ************************************ 00:23:25.519 END TEST nvmf_shutdown 00:23:25.519 ************************************ 00:23:25.519 22:20:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:25.519 22:20:50 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:23:25.519 22:20:50 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:25.519 22:20:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:25.519 22:20:50 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:23:25.519 22:20:50 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:25.519 22:20:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:25.519 22:20:50 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:23:25.519 22:20:50 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:25.519 22:20:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:25.519 22:20:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:25.519 22:20:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:25.519 ************************************ 00:23:25.519 START TEST nvmf_multicontroller 00:23:25.519 ************************************ 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:25.519 * Looking for test storage... 
00:23:25.519 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:25.519 22:20:50 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:25.519 22:20:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:25.520 22:20:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:25.520 22:20:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.520 22:20:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:25.520 22:20:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.520 22:20:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:25.520 22:20:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:25.520 22:20:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:23:25.520 22:20:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.662 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:33.662 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:23:33.662 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:33.662 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:33.662 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:33.662 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:33.662 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:33.662 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:23:33.662 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:33.662 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:23:33.662 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:23:33.662 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:23:33.662 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:23:33.662 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:23:33.662 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:23:33.662 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:33.662 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:33.662 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:33.662 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:33.662 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:33.662 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:33.663 22:20:57 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:33.663 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:33.663 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:33.663 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:33.663 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:33.663 22:20:57 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:33.663 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:33.663 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.513 ms 00:23:33.663 00:23:33.663 --- 10.0.0.2 ping statistics --- 00:23:33.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.663 rtt min/avg/max/mdev = 0.513/0.513/0.513/0.000 ms 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:33.663 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:33.663 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.376 ms 00:23:33.663 00:23:33.663 --- 10.0.0.1 ping statistics --- 00:23:33.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.663 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2854880 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2854880 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 2854880 ']' 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:33.663 22:20:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.663 [2024-07-15 22:20:57.875828] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:23:33.663 [2024-07-15 22:20:57.875876] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:33.663 EAL: No free 2048 kB hugepages reported on node 1 00:23:33.663 [2024-07-15 22:20:57.959335] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:33.663 [2024-07-15 22:20:58.024293] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:33.663 [2024-07-15 22:20:58.024344] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:33.663 [2024-07-15 22:20:58.024352] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:33.664 [2024-07-15 22:20:58.024359] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:33.664 [2024-07-15 22:20:58.024365] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
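To make the multicontroller setup below easier to follow: the rpc_cmd calls in the trace are thin wrappers around SPDK's scripts/rpc.py talking to the nvmf_tgt just started (by default over the /var/tmp/spdk.sock RPC socket, which is why no netns prefix is needed). Collapsed into plain rpc.py invocations, the target-side configuration for the first subsystem looks roughly like the sketch below; cnode2/Malloc1 are configured the same way, and the script path is assumed to be relative to the SPDK repository checkout:

# Target-side configuration used by multicontroller.sh (condensed sketch):
rpc=./scripts/rpc.py                       # defaults to /var/tmp/spdk.sock
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421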
00:23:33.664 [2024-07-15 22:20:58.024468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:33.664 [2024-07-15 22:20:58.024623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:33.664 [2024-07-15 22:20:58.024624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.664 [2024-07-15 22:20:58.740101] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.664 Malloc0 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.664 [2024-07-15 22:20:58.814785] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.664 
22:20:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.664 [2024-07-15 22:20:58.826736] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.664 Malloc1 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2855067 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 2855067 /var/tmp/bdevperf.sock 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 2855067 ']' 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:33.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:33.664 22:20:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.603 NVMe0n1 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.603 1 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.603 request: 00:23:34.603 { 00:23:34.603 "name": "NVMe0", 00:23:34.603 "trtype": "tcp", 00:23:34.603 "traddr": "10.0.0.2", 00:23:34.603 "adrfam": "ipv4", 00:23:34.603 "trsvcid": "4420", 00:23:34.603 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.603 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:34.603 "hostaddr": "10.0.0.2", 00:23:34.603 "hostsvcid": "60000", 00:23:34.603 "prchk_reftag": false, 00:23:34.603 "prchk_guard": false, 00:23:34.603 "hdgst": false, 00:23:34.603 "ddgst": false, 00:23:34.603 "method": "bdev_nvme_attach_controller", 00:23:34.603 "req_id": 1 00:23:34.603 } 00:23:34.603 Got JSON-RPC error response 00:23:34.603 response: 00:23:34.603 { 00:23:34.603 "code": -114, 00:23:34.603 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:34.603 } 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.603 request: 00:23:34.603 { 00:23:34.603 "name": "NVMe0", 00:23:34.603 "trtype": "tcp", 00:23:34.603 "traddr": "10.0.0.2", 00:23:34.603 "adrfam": "ipv4", 00:23:34.603 "trsvcid": "4420", 00:23:34.603 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:34.603 "hostaddr": "10.0.0.2", 00:23:34.603 "hostsvcid": "60000", 00:23:34.603 "prchk_reftag": false, 00:23:34.603 "prchk_guard": false, 00:23:34.603 
"hdgst": false, 00:23:34.603 "ddgst": false, 00:23:34.603 "method": "bdev_nvme_attach_controller", 00:23:34.603 "req_id": 1 00:23:34.603 } 00:23:34.603 Got JSON-RPC error response 00:23:34.603 response: 00:23:34.603 { 00:23:34.603 "code": -114, 00:23:34.603 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:34.603 } 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:34.603 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:34.604 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.604 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.604 request: 00:23:34.604 { 00:23:34.604 "name": "NVMe0", 00:23:34.604 "trtype": "tcp", 00:23:34.604 "traddr": "10.0.0.2", 00:23:34.604 "adrfam": "ipv4", 00:23:34.604 "trsvcid": "4420", 00:23:34.604 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.604 "hostaddr": "10.0.0.2", 00:23:34.604 "hostsvcid": "60000", 00:23:34.604 "prchk_reftag": false, 00:23:34.604 "prchk_guard": false, 00:23:34.604 "hdgst": false, 00:23:34.604 "ddgst": false, 00:23:34.604 "multipath": "disable", 00:23:34.604 "method": "bdev_nvme_attach_controller", 00:23:34.604 "req_id": 1 00:23:34.604 } 00:23:34.604 Got JSON-RPC error response 00:23:34.604 response: 00:23:34.604 { 00:23:34.604 "code": -114, 00:23:34.604 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:23:34.604 } 00:23:34.604 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:34.604 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:34.604 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:34.604 22:20:59 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:34.604 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:34.604 22:20:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:34.604 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:34.604 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:34.604 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:34.604 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:34.604 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:34.863 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:34.863 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:34.863 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.863 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.863 request: 00:23:34.863 { 00:23:34.863 "name": "NVMe0", 00:23:34.863 "trtype": "tcp", 00:23:34.863 "traddr": "10.0.0.2", 00:23:34.863 "adrfam": "ipv4", 00:23:34.863 "trsvcid": "4420", 00:23:34.863 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.863 "hostaddr": "10.0.0.2", 00:23:34.863 "hostsvcid": "60000", 00:23:34.863 "prchk_reftag": false, 00:23:34.863 "prchk_guard": false, 00:23:34.863 "hdgst": false, 00:23:34.863 "ddgst": false, 00:23:34.863 "multipath": "failover", 00:23:34.863 "method": "bdev_nvme_attach_controller", 00:23:34.863 "req_id": 1 00:23:34.863 } 00:23:34.863 Got JSON-RPC error response 00:23:34.863 response: 00:23:34.863 { 00:23:34.863 "code": -114, 00:23:34.863 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:34.863 } 00:23:34.863 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:34.863 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:34.863 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:34.863 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:34.863 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:34.863 22:20:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:34.863 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.863 22:20:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.863 00:23:34.863 22:21:00 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.864 22:21:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:34.864 22:21:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.864 22:21:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:35.123 22:21:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.123 22:21:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:35.123 22:21:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.123 22:21:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:35.123 00:23:35.123 22:21:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.123 22:21:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:35.123 22:21:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:35.123 22:21:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.123 22:21:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:35.123 22:21:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.123 22:21:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:35.123 22:21:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:36.505 0 00:23:36.505 22:21:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:36.505 22:21:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.505 22:21:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:36.505 22:21:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.505 22:21:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2855067 00:23:36.505 22:21:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 2855067 ']' 00:23:36.505 22:21:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 2855067 00:23:36.505 22:21:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:23:36.505 22:21:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:36.505 22:21:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2855067 00:23:36.505 22:21:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:36.505 22:21:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:36.505 22:21:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2855067' 00:23:36.505 killing process with pid 2855067 00:23:36.505 22:21:01 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 2855067 00:23:36.505 22:21:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 2855067 00:23:36.505 22:21:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:36.505 22:21:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.505 22:21:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:36.505 22:21:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.505 22:21:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:36.506 22:21:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.506 22:21:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:36.506 22:21:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.506 22:21:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:23:36.506 22:21:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:36.506 22:21:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:36.506 22:21:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:36.506 22:21:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:23:36.506 22:21:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:23:36.506 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:36.506 [2024-07-15 22:20:58.946705] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:23:36.506 [2024-07-15 22:20:58.946756] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2855067 ] 00:23:36.506 EAL: No free 2048 kB hugepages reported on node 1 00:23:36.506 [2024-07-15 22:20:59.004844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.506 [2024-07-15 22:20:59.069996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.506 [2024-07-15 22:21:00.379090] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name fcc90653-7e47-4125-a7c4-6bdee396f8bc already exists 00:23:36.506 [2024-07-15 22:21:00.379120] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:fcc90653-7e47-4125-a7c4-6bdee396f8bc alias for bdev NVMe1n1 00:23:36.506 [2024-07-15 22:21:00.379133] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:36.506 Running I/O for 1 seconds... 
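For reference, the JSON-RPC exchanges traced above can be replayed by hand against the bdevperf RPC socket. The following is a minimal sketch, not the test script itself, assuming an SPDK checkout with the usual scripts/rpc.py helper, a bdevperf instance listening on /var/tmp/bdevperf.sock, and a controller named NVMe0 already attached to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 earlier in the run; only flags that appear in the log are used.

  # Condensed multicontroller sequence; the first three calls are expected to
  # fail with -114 ("A controller named NVMe0 already exists ..."), so their
  # non-zero exit status is tolerated with "|| true".
  RPC="scripts/rpc.py -s /var/tmp/bdevperf.sock"

  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 || true
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable || true
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover || true

  # The second listener port (4421) is accepted as a new path, detached again,
  # and then re-used for a second controller name.
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1
  $RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1
  $RPC bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  $RPC bdev_nvme_get_controllers | grep -c NVMe    # the test expects 2 here

  # I/O is then driven over both controllers before NVMe1 is detached.
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
  $RPC bdev_nvme_detach_controller NVMe1
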
00:23:36.506 00:23:36.506 Latency(us) 00:23:36.506 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:36.506 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:36.506 NVMe0n1 : 1.00 27990.57 109.34 0.00 0.00 4558.18 4014.08 10540.37 00:23:36.506 =================================================================================================================== 00:23:36.506 Total : 27990.57 109.34 0.00 0.00 4558.18 4014.08 10540.37 00:23:36.506 Received shutdown signal, test time was about 1.000000 seconds 00:23:36.506 00:23:36.506 Latency(us) 00:23:36.506 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:36.506 =================================================================================================================== 00:23:36.506 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:36.506 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:36.506 22:21:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:36.506 22:21:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:36.506 22:21:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:23:36.506 22:21:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:36.506 22:21:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:23:36.506 22:21:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:36.506 22:21:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:23:36.506 22:21:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:36.506 22:21:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:36.506 rmmod nvme_tcp 00:23:36.506 rmmod nvme_fabrics 00:23:36.506 rmmod nvme_keyring 00:23:36.506 22:21:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:36.506 22:21:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:23:36.506 22:21:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:23:36.506 22:21:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2854880 ']' 00:23:36.506 22:21:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2854880 00:23:36.506 22:21:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 2854880 ']' 00:23:36.506 22:21:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 2854880 00:23:36.506 22:21:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:23:36.506 22:21:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:36.767 22:21:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2854880 00:23:36.767 22:21:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:36.767 22:21:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:36.767 22:21:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2854880' 00:23:36.767 killing process with pid 2854880 00:23:36.767 22:21:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 2854880 00:23:36.767 22:21:01 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 2854880 00:23:36.767 22:21:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:36.767 22:21:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:36.767 22:21:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:36.767 22:21:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:36.767 22:21:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:36.767 22:21:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.767 22:21:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:36.767 22:21:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.321 22:21:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:39.321 00:23:39.321 real 0m13.568s 00:23:39.321 user 0m17.103s 00:23:39.321 sys 0m6.075s 00:23:39.321 22:21:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:39.321 22:21:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:39.321 ************************************ 00:23:39.321 END TEST nvmf_multicontroller 00:23:39.321 ************************************ 00:23:39.321 22:21:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:39.321 22:21:04 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:39.321 22:21:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:39.321 22:21:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:39.321 22:21:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:39.321 ************************************ 00:23:39.321 START TEST nvmf_aer 00:23:39.321 ************************************ 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:39.321 * Looking for test storage... 
00:23:39.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:23:39.321 22:21:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:45.914 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:45.914 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:23:45.914 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:45.914 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:23:45.914 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:45.914 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:45.914 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:45.914 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:23:45.914 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:45.914 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:23:45.914 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:23:45.914 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:23:45.914 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:23:45.914 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:23:45.914 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:23:45.914 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:45.914 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:45.914 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:45.914 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:45.914 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:45.914 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:45.914 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:45.914 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:45.914 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:45.914 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:45.914 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:45.914 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:45.914 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:45.914 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:45.914 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:45.914 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:45.914 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:45.914 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:45.914 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:45.914 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:45.914 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:45.914 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:45.914 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:45.914 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:45.914 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:45.915 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:45.915 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 
0x159b)' 00:23:45.915 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:45.915 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:45.915 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:45.915 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:45.915 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:45.915 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:45.915 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:45.915 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:45.915 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:45.915 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:45.915 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:45.915 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:45.915 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:45.915 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:45.915 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:45.915 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:45.915 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:45.915 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:45.915 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:45.915 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:45.915 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:45.915 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:45.915 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:45.915 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:45.915 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:45.915 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:45.915 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:45.915 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:45.915 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:45.915 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:45.915 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:23:45.915 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:45.915 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:45.915 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:45.915 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:45.915 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:45.915 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:45.915 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:45.915 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:45.915 
22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:45.915 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:45.915 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:45.915 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:45.915 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:45.915 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:45.915 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:45.915 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:45.915 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:46.177 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:46.177 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:46.177 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:46.177 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:46.177 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:46.177 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:46.177 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:46.177 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.550 ms 00:23:46.177 00:23:46.177 --- 10.0.0.2 ping statistics --- 00:23:46.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.177 rtt min/avg/max/mdev = 0.550/0.550/0.550/0.000 ms 00:23:46.177 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:46.177 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:46.177 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.357 ms 00:23:46.177 00:23:46.177 --- 10.0.0.1 ping statistics --- 00:23:46.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.177 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:23:46.177 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:46.177 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:23:46.177 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:46.177 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:46.177 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:46.177 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:46.177 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:46.177 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:46.177 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:46.177 22:21:11 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:46.177 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:46.177 22:21:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:46.177 22:21:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:46.177 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2860315 00:23:46.177 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2860315 00:23:46.177 22:21:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 2860315 ']' 00:23:46.177 22:21:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.177 22:21:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:46.177 22:21:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:46.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:46.177 22:21:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:46.177 22:21:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:46.177 22:21:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:46.177 [2024-07-15 22:21:11.491798] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:23:46.177 [2024-07-15 22:21:11.491863] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:46.437 EAL: No free 2048 kB hugepages reported on node 1 00:23:46.437 [2024-07-15 22:21:11.563065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:46.437 [2024-07-15 22:21:11.642581] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:46.437 [2024-07-15 22:21:11.642627] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
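The two successful pings above come from the physical-NIC topology that nvmftestinit builds for NET_TYPE=phy: one port of the E810 pair is moved into a private network namespace and acts as the target side, while the other stays in the root namespace as the initiator side. A condensed sketch of those steps, assuming the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses detected in this log:

  # Target port goes into its own namespace; initiator port stays in the host.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Allow NVMe/TCP traffic in and verify reachability in both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

  # The target application is then launched inside that namespace.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
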
00:23:46.437 [2024-07-15 22:21:11.642635] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:46.437 [2024-07-15 22:21:11.642641] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:46.437 [2024-07-15 22:21:11.642647] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:46.437 [2024-07-15 22:21:11.642848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:46.437 [2024-07-15 22:21:11.642966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:46.437 [2024-07-15 22:21:11.643133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.437 [2024-07-15 22:21:11.643146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:47.005 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:47.005 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:23:47.005 22:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:47.005 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:47.005 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:47.005 22:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:47.005 22:21:12 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:47.005 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.005 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:47.005 [2024-07-15 22:21:12.312728] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:47.005 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.005 22:21:12 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:47.005 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.005 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:47.265 Malloc0 00:23:47.265 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.265 22:21:12 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:47.265 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.265 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:47.265 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.265 22:21:12 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:47.265 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.265 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:47.265 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.265 22:21:12 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:47.265 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.265 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:47.265 [2024-07-15 22:21:12.353273] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:23:47.265 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.265 22:21:12 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:47.265 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.265 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:47.265 [ 00:23:47.265 { 00:23:47.265 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:47.265 "subtype": "Discovery", 00:23:47.265 "listen_addresses": [], 00:23:47.265 "allow_any_host": true, 00:23:47.265 "hosts": [] 00:23:47.265 }, 00:23:47.265 { 00:23:47.265 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:47.265 "subtype": "NVMe", 00:23:47.265 "listen_addresses": [ 00:23:47.265 { 00:23:47.265 "trtype": "TCP", 00:23:47.265 "adrfam": "IPv4", 00:23:47.265 "traddr": "10.0.0.2", 00:23:47.265 "trsvcid": "4420" 00:23:47.265 } 00:23:47.265 ], 00:23:47.265 "allow_any_host": true, 00:23:47.265 "hosts": [], 00:23:47.265 "serial_number": "SPDK00000000000001", 00:23:47.265 "model_number": "SPDK bdev Controller", 00:23:47.265 "max_namespaces": 2, 00:23:47.265 "min_cntlid": 1, 00:23:47.265 "max_cntlid": 65519, 00:23:47.265 "namespaces": [ 00:23:47.265 { 00:23:47.265 "nsid": 1, 00:23:47.265 "bdev_name": "Malloc0", 00:23:47.265 "name": "Malloc0", 00:23:47.265 "nguid": "DA70A0AB1947479EAF238657611FB6E7", 00:23:47.265 "uuid": "da70a0ab-1947-479e-af23-8657611fb6e7" 00:23:47.265 } 00:23:47.265 ] 00:23:47.265 } 00:23:47.265 ] 00:23:47.265 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.265 22:21:12 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:47.265 22:21:12 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:47.265 22:21:12 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=2860506 00:23:47.265 22:21:12 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:47.265 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:23:47.265 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:47.265 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:23:47.265 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:23:47.265 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:47.265 22:21:12 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:47.265 EAL: No free 2048 kB hugepages reported on node 1 00:23:47.265 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:47.265 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:23:47.265 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:23:47.265 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:47.265 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:47.265 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:47.265 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:23:47.265 22:21:12 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:47.265 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.265 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:47.525 Malloc1 00:23:47.525 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.525 22:21:12 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:47.525 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.525 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:47.525 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.525 22:21:12 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:47.525 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.525 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:47.525 [ 00:23:47.525 { 00:23:47.525 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:47.525 "subtype": "Discovery", 00:23:47.525 "listen_addresses": [], 00:23:47.525 "allow_any_host": true, 00:23:47.525 "hosts": [] 00:23:47.525 }, 00:23:47.525 { 00:23:47.525 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:47.525 "subtype": "NVMe", 00:23:47.525 "listen_addresses": [ 00:23:47.525 { 00:23:47.525 "trtype": "TCP", 00:23:47.525 "adrfam": "IPv4", 00:23:47.525 "traddr": "10.0.0.2", 00:23:47.525 "trsvcid": "4420" 00:23:47.525 } 00:23:47.525 ], 00:23:47.525 "allow_any_host": true, 00:23:47.525 "hosts": [], 00:23:47.525 "serial_number": "SPDK00000000000001", 00:23:47.525 "model_number": "SPDK bdev Controller", 00:23:47.525 "max_namespaces": 2, 00:23:47.525 "min_cntlid": 1, 00:23:47.525 "max_cntlid": 65519, 00:23:47.525 "namespaces": [ 00:23:47.525 { 00:23:47.525 "nsid": 1, 00:23:47.525 "bdev_name": "Malloc0", 00:23:47.525 "name": "Malloc0", 00:23:47.525 "nguid": "DA70A0AB1947479EAF238657611FB6E7", 00:23:47.525 "uuid": "da70a0ab-1947-479e-af23-8657611fb6e7" 00:23:47.525 }, 00:23:47.525 { 00:23:47.525 "nsid": 2, 00:23:47.525 "bdev_name": "Malloc1", 00:23:47.525 "name": "Malloc1", 00:23:47.525 "nguid": "22E386E3B32B4918829E875DC8D8016F", 00:23:47.525 "uuid": "22e386e3-b32b-4918-829e-875dc8d8016f" 00:23:47.525 } 00:23:47.525 ] 00:23:47.525 } 00:23:47.525 ] 00:23:47.525 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.525 22:21:12 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 2860506 00:23:47.525 Asynchronous Event Request test 00:23:47.525 Attaching to 10.0.0.2 00:23:47.525 Attached to 10.0.0.2 00:23:47.525 Registering asynchronous event callbacks... 00:23:47.525 Starting namespace attribute notice tests for all controllers... 00:23:47.525 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:47.525 aer_cb - Changed Namespace 00:23:47.525 Cleaning up... 
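The "aer_cb - Changed Namespace" line above is the expected outcome of the flow traced in host/aer.sh: an AER-armed host receives the namespace-attribute-change notice when a second namespace is added to the subsystem. A condensed sketch of that flow, assuming the default /var/tmp/spdk.sock RPC socket and the NQN, address, and touch-file path shown in this log:

  RPC="scripts/rpc.py"

  # Target side: TCP transport, a 64 MB malloc bdev, and a 2-namespace subsystem.
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 --name Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Host side: the aer helper connects, registers the callback and touches the
  # file once the namespace-change AEN arrives.
  TRID='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
  rm -f /tmp/aer_touch_file
  test/nvme/aer/aer -r "$TRID" -n 2 -t /tmp/aer_touch_file &

  # Adding a second namespace is what triggers the "Changed Namespace" callback.
  $RPC bdev_malloc_create 64 4096 --name Malloc1
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
  wait    # the helper exits once the event has been handled
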
00:23:47.525 22:21:12 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:47.525 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.525 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:47.525 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.525 22:21:12 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:47.525 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.525 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:47.525 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.525 22:21:12 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:47.525 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.525 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:47.525 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.525 22:21:12 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:47.525 22:21:12 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:47.525 22:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:47.525 22:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:23:47.525 22:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:47.525 22:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:23:47.525 22:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:47.525 22:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:47.525 rmmod nvme_tcp 00:23:47.525 rmmod nvme_fabrics 00:23:47.525 rmmod nvme_keyring 00:23:47.525 22:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:47.525 22:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:23:47.525 22:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:23:47.525 22:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2860315 ']' 00:23:47.525 22:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2860315 00:23:47.525 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 2860315 ']' 00:23:47.525 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 2860315 00:23:47.525 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:23:47.525 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:47.525 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2860315 00:23:47.525 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:47.525 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:47.525 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2860315' 00:23:47.525 killing process with pid 2860315 00:23:47.525 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 2860315 00:23:47.525 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 2860315 00:23:47.821 22:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:47.821 22:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:47.821 22:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- 
# nvmf_tcp_fini 00:23:47.821 22:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:47.821 22:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:47.821 22:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.821 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:47.822 22:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:49.779 22:21:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:49.779 00:23:49.779 real 0m10.845s 00:23:49.779 user 0m7.199s 00:23:49.779 sys 0m5.724s 00:23:49.779 22:21:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:49.779 22:21:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:49.779 ************************************ 00:23:49.779 END TEST nvmf_aer 00:23:49.779 ************************************ 00:23:49.779 22:21:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:49.779 22:21:15 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:49.779 22:21:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:49.779 22:21:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:49.779 22:21:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:49.779 ************************************ 00:23:49.779 START TEST nvmf_async_init 00:23:49.779 ************************************ 00:23:49.779 22:21:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:50.040 * Looking for test storage... 
00:23:50.040 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=864b3dcb68674bf4acdcb3267efb1b6d 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:50.040 22:21:15 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:23:50.040 22:21:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:56.631 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:56.631 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:56.631 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:56.631 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:56.631 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.632 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:56.632 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:23:56.632 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:56.632 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:56.632 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:56.632 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:56.632 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:56.632 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:56.632 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:56.632 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:56.632 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:56.632 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:56.632 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:56.632 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:56.632 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:56.632 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:56.632 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:56.632 22:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:56.893 22:21:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:56.893 22:21:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:56.893 22:21:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:56.893 22:21:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:56.893 22:21:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:56.893 22:21:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:23:56.893 22:21:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:56.893 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:56.893 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.567 ms 00:23:56.893 00:23:56.893 --- 10.0.0.2 ping statistics --- 00:23:56.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.893 rtt min/avg/max/mdev = 0.567/0.567/0.567/0.000 ms 00:23:56.893 22:21:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:56.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:56.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.419 ms 00:23:56.893 00:23:56.893 --- 10.0.0.1 ping statistics --- 00:23:56.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.893 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:23:56.893 22:21:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:56.893 22:21:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:23:56.893 22:21:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:56.893 22:21:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:56.893 22:21:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:56.893 22:21:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:56.893 22:21:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:56.893 22:21:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:56.893 22:21:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:57.153 22:21:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:57.153 22:21:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:57.153 22:21:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:57.153 22:21:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:57.153 22:21:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2864799 00:23:57.153 22:21:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2864799 00:23:57.153 22:21:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:57.153 22:21:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 2864799 ']' 00:23:57.153 22:21:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.153 22:21:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:57.153 22:21:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:57.153 22:21:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:57.153 22:21:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:57.153 [2024-07-15 22:21:22.302968] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
00:23:57.153 [2024-07-15 22:21:22.303020] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:57.153 EAL: No free 2048 kB hugepages reported on node 1 00:23:57.153 [2024-07-15 22:21:22.367874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.153 [2024-07-15 22:21:22.431176] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:57.153 [2024-07-15 22:21:22.431213] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:57.153 [2024-07-15 22:21:22.431220] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:57.153 [2024-07-15 22:21:22.431227] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:57.153 [2024-07-15 22:21:22.431232] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:57.153 [2024-07-15 22:21:22.431251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:58.090 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:58.090 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:23:58.090 22:21:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:58.090 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:58.090 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.090 22:21:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:58.090 22:21:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:58.090 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.090 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.090 [2024-07-15 22:21:23.117634] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:58.091 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.091 22:21:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:58.091 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.091 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.091 null0 00:23:58.091 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.091 22:21:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:58.091 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.091 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.091 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.091 22:21:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:58.091 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.091 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.091 22:21:23 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.091 22:21:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 864b3dcb68674bf4acdcb3267efb1b6d 00:23:58.091 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.091 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.091 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.091 22:21:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:58.091 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.091 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.091 [2024-07-15 22:21:23.157860] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:58.091 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.091 22:21:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:58.091 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.091 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.091 nvme0n1 00:23:58.091 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.091 22:21:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:58.091 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.091 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.091 [ 00:23:58.091 { 00:23:58.091 "name": "nvme0n1", 00:23:58.091 "aliases": [ 00:23:58.091 "864b3dcb-6867-4bf4-acdc-b3267efb1b6d" 00:23:58.091 ], 00:23:58.091 "product_name": "NVMe disk", 00:23:58.091 "block_size": 512, 00:23:58.091 "num_blocks": 2097152, 00:23:58.091 "uuid": "864b3dcb-6867-4bf4-acdc-b3267efb1b6d", 00:23:58.091 "assigned_rate_limits": { 00:23:58.091 "rw_ios_per_sec": 0, 00:23:58.091 "rw_mbytes_per_sec": 0, 00:23:58.091 "r_mbytes_per_sec": 0, 00:23:58.091 "w_mbytes_per_sec": 0 00:23:58.091 }, 00:23:58.091 "claimed": false, 00:23:58.091 "zoned": false, 00:23:58.091 "supported_io_types": { 00:23:58.091 "read": true, 00:23:58.091 "write": true, 00:23:58.091 "unmap": false, 00:23:58.091 "flush": true, 00:23:58.091 "reset": true, 00:23:58.091 "nvme_admin": true, 00:23:58.091 "nvme_io": true, 00:23:58.091 "nvme_io_md": false, 00:23:58.091 "write_zeroes": true, 00:23:58.091 "zcopy": false, 00:23:58.091 "get_zone_info": false, 00:23:58.091 "zone_management": false, 00:23:58.091 "zone_append": false, 00:23:58.091 "compare": true, 00:23:58.091 "compare_and_write": true, 00:23:58.091 "abort": true, 00:23:58.091 "seek_hole": false, 00:23:58.091 "seek_data": false, 00:23:58.091 "copy": true, 00:23:58.091 "nvme_iov_md": false 00:23:58.091 }, 00:23:58.091 "memory_domains": [ 00:23:58.091 { 00:23:58.091 "dma_device_id": "system", 00:23:58.091 "dma_device_type": 1 00:23:58.091 } 00:23:58.091 ], 00:23:58.091 "driver_specific": { 00:23:58.091 "nvme": [ 00:23:58.091 { 00:23:58.091 "trid": { 00:23:58.091 "trtype": "TCP", 00:23:58.091 "adrfam": "IPv4", 00:23:58.091 "traddr": "10.0.0.2", 
00:23:58.091 "trsvcid": "4420", 00:23:58.091 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:58.091 }, 00:23:58.091 "ctrlr_data": { 00:23:58.091 "cntlid": 1, 00:23:58.091 "vendor_id": "0x8086", 00:23:58.091 "model_number": "SPDK bdev Controller", 00:23:58.091 "serial_number": "00000000000000000000", 00:23:58.091 "firmware_revision": "24.09", 00:23:58.091 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:58.091 "oacs": { 00:23:58.091 "security": 0, 00:23:58.091 "format": 0, 00:23:58.091 "firmware": 0, 00:23:58.091 "ns_manage": 0 00:23:58.091 }, 00:23:58.091 "multi_ctrlr": true, 00:23:58.091 "ana_reporting": false 00:23:58.091 }, 00:23:58.091 "vs": { 00:23:58.091 "nvme_version": "1.3" 00:23:58.091 }, 00:23:58.091 "ns_data": { 00:23:58.091 "id": 1, 00:23:58.091 "can_share": true 00:23:58.091 } 00:23:58.091 } 00:23:58.091 ], 00:23:58.091 "mp_policy": "active_passive" 00:23:58.091 } 00:23:58.091 } 00:23:58.091 ] 00:23:58.091 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.091 22:21:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:58.091 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.091 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.091 [2024-07-15 22:21:23.406326] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:58.091 [2024-07-15 22:21:23.406387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2490df0 (9): Bad file descriptor 00:23:58.351 [2024-07-15 22:21:23.538222] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:58.351 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.351 22:21:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:58.351 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.351 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.351 [ 00:23:58.351 { 00:23:58.351 "name": "nvme0n1", 00:23:58.351 "aliases": [ 00:23:58.351 "864b3dcb-6867-4bf4-acdc-b3267efb1b6d" 00:23:58.351 ], 00:23:58.351 "product_name": "NVMe disk", 00:23:58.351 "block_size": 512, 00:23:58.351 "num_blocks": 2097152, 00:23:58.351 "uuid": "864b3dcb-6867-4bf4-acdc-b3267efb1b6d", 00:23:58.351 "assigned_rate_limits": { 00:23:58.351 "rw_ios_per_sec": 0, 00:23:58.351 "rw_mbytes_per_sec": 0, 00:23:58.351 "r_mbytes_per_sec": 0, 00:23:58.351 "w_mbytes_per_sec": 0 00:23:58.351 }, 00:23:58.351 "claimed": false, 00:23:58.351 "zoned": false, 00:23:58.351 "supported_io_types": { 00:23:58.351 "read": true, 00:23:58.351 "write": true, 00:23:58.351 "unmap": false, 00:23:58.351 "flush": true, 00:23:58.351 "reset": true, 00:23:58.351 "nvme_admin": true, 00:23:58.351 "nvme_io": true, 00:23:58.351 "nvme_io_md": false, 00:23:58.351 "write_zeroes": true, 00:23:58.351 "zcopy": false, 00:23:58.351 "get_zone_info": false, 00:23:58.351 "zone_management": false, 00:23:58.351 "zone_append": false, 00:23:58.351 "compare": true, 00:23:58.351 "compare_and_write": true, 00:23:58.351 "abort": true, 00:23:58.351 "seek_hole": false, 00:23:58.351 "seek_data": false, 00:23:58.351 "copy": true, 00:23:58.351 "nvme_iov_md": false 00:23:58.351 }, 00:23:58.351 "memory_domains": [ 00:23:58.351 { 00:23:58.351 "dma_device_id": "system", 00:23:58.351 "dma_device_type": 
1 00:23:58.351 } 00:23:58.351 ], 00:23:58.351 "driver_specific": { 00:23:58.351 "nvme": [ 00:23:58.351 { 00:23:58.351 "trid": { 00:23:58.351 "trtype": "TCP", 00:23:58.351 "adrfam": "IPv4", 00:23:58.351 "traddr": "10.0.0.2", 00:23:58.351 "trsvcid": "4420", 00:23:58.351 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:58.351 }, 00:23:58.351 "ctrlr_data": { 00:23:58.351 "cntlid": 2, 00:23:58.351 "vendor_id": "0x8086", 00:23:58.351 "model_number": "SPDK bdev Controller", 00:23:58.351 "serial_number": "00000000000000000000", 00:23:58.351 "firmware_revision": "24.09", 00:23:58.351 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:58.351 "oacs": { 00:23:58.351 "security": 0, 00:23:58.351 "format": 0, 00:23:58.351 "firmware": 0, 00:23:58.351 "ns_manage": 0 00:23:58.351 }, 00:23:58.351 "multi_ctrlr": true, 00:23:58.351 "ana_reporting": false 00:23:58.351 }, 00:23:58.351 "vs": { 00:23:58.351 "nvme_version": "1.3" 00:23:58.351 }, 00:23:58.351 "ns_data": { 00:23:58.351 "id": 1, 00:23:58.351 "can_share": true 00:23:58.351 } 00:23:58.351 } 00:23:58.351 ], 00:23:58.351 "mp_policy": "active_passive" 00:23:58.351 } 00:23:58.351 } 00:23:58.351 ] 00:23:58.351 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.351 22:21:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:58.351 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.351 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.351 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.351 22:21:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:58.351 22:21:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.0j1a7nVC6y 00:23:58.351 22:21:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:58.351 22:21:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.0j1a7nVC6y 00:23:58.351 22:21:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:58.351 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.351 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.352 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.352 22:21:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:58.352 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.352 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.352 [2024-07-15 22:21:23.590919] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:58.352 [2024-07-15 22:21:23.591030] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:58.352 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.352 22:21:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0j1a7nVC6y 00:23:58.352 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
00:23:58.352 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.352 [2024-07-15 22:21:23.598934] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:58.352 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.352 22:21:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0j1a7nVC6y 00:23:58.352 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.352 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.352 [2024-07-15 22:21:23.606972] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:58.352 [2024-07-15 22:21:23.607007] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:58.352 nvme0n1 00:23:58.612 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.612 22:21:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:58.612 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.612 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.612 [ 00:23:58.612 { 00:23:58.612 "name": "nvme0n1", 00:23:58.612 "aliases": [ 00:23:58.612 "864b3dcb-6867-4bf4-acdc-b3267efb1b6d" 00:23:58.612 ], 00:23:58.612 "product_name": "NVMe disk", 00:23:58.612 "block_size": 512, 00:23:58.612 "num_blocks": 2097152, 00:23:58.612 "uuid": "864b3dcb-6867-4bf4-acdc-b3267efb1b6d", 00:23:58.612 "assigned_rate_limits": { 00:23:58.612 "rw_ios_per_sec": 0, 00:23:58.612 "rw_mbytes_per_sec": 0, 00:23:58.612 "r_mbytes_per_sec": 0, 00:23:58.612 "w_mbytes_per_sec": 0 00:23:58.612 }, 00:23:58.612 "claimed": false, 00:23:58.612 "zoned": false, 00:23:58.612 "supported_io_types": { 00:23:58.612 "read": true, 00:23:58.612 "write": true, 00:23:58.612 "unmap": false, 00:23:58.612 "flush": true, 00:23:58.612 "reset": true, 00:23:58.612 "nvme_admin": true, 00:23:58.612 "nvme_io": true, 00:23:58.612 "nvme_io_md": false, 00:23:58.612 "write_zeroes": true, 00:23:58.612 "zcopy": false, 00:23:58.612 "get_zone_info": false, 00:23:58.612 "zone_management": false, 00:23:58.612 "zone_append": false, 00:23:58.612 "compare": true, 00:23:58.612 "compare_and_write": true, 00:23:58.612 "abort": true, 00:23:58.612 "seek_hole": false, 00:23:58.612 "seek_data": false, 00:23:58.612 "copy": true, 00:23:58.612 "nvme_iov_md": false 00:23:58.612 }, 00:23:58.612 "memory_domains": [ 00:23:58.612 { 00:23:58.612 "dma_device_id": "system", 00:23:58.612 "dma_device_type": 1 00:23:58.612 } 00:23:58.612 ], 00:23:58.612 "driver_specific": { 00:23:58.612 "nvme": [ 00:23:58.612 { 00:23:58.612 "trid": { 00:23:58.612 "trtype": "TCP", 00:23:58.612 "adrfam": "IPv4", 00:23:58.612 "traddr": "10.0.0.2", 00:23:58.612 "trsvcid": "4421", 00:23:58.612 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:58.612 }, 00:23:58.612 "ctrlr_data": { 00:23:58.612 "cntlid": 3, 00:23:58.612 "vendor_id": "0x8086", 00:23:58.612 "model_number": "SPDK bdev Controller", 00:23:58.612 "serial_number": "00000000000000000000", 00:23:58.612 "firmware_revision": "24.09", 00:23:58.612 "subnqn": "nqn.2016-06.io.spdk:cnode0", 
00:23:58.612 "oacs": { 00:23:58.612 "security": 0, 00:23:58.612 "format": 0, 00:23:58.612 "firmware": 0, 00:23:58.612 "ns_manage": 0 00:23:58.612 }, 00:23:58.612 "multi_ctrlr": true, 00:23:58.612 "ana_reporting": false 00:23:58.612 }, 00:23:58.612 "vs": { 00:23:58.612 "nvme_version": "1.3" 00:23:58.612 }, 00:23:58.612 "ns_data": { 00:23:58.612 "id": 1, 00:23:58.612 "can_share": true 00:23:58.612 } 00:23:58.612 } 00:23:58.612 ], 00:23:58.612 "mp_policy": "active_passive" 00:23:58.612 } 00:23:58.612 } 00:23:58.612 ] 00:23:58.612 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.612 22:21:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:58.612 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.612 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.612 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.612 22:21:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.0j1a7nVC6y 00:23:58.612 22:21:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:23:58.612 22:21:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:23:58.612 22:21:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:58.612 22:21:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:23:58.612 22:21:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:58.612 22:21:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:23:58.612 22:21:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:58.612 22:21:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:58.612 rmmod nvme_tcp 00:23:58.612 rmmod nvme_fabrics 00:23:58.612 rmmod nvme_keyring 00:23:58.612 22:21:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:58.612 22:21:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:23:58.612 22:21:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:23:58.612 22:21:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2864799 ']' 00:23:58.612 22:21:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2864799 00:23:58.612 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 2864799 ']' 00:23:58.612 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 2864799 00:23:58.612 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:23:58.612 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:58.612 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2864799 00:23:58.612 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:58.612 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:58.612 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2864799' 00:23:58.612 killing process with pid 2864799 00:23:58.612 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 2864799 00:23:58.612 [2024-07-15 22:21:23.826740] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled 
for removal in v24.09 hit 1 times 00:23:58.612 [2024-07-15 22:21:23.826768] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:58.612 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 2864799 00:23:58.872 22:21:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:58.872 22:21:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:58.872 22:21:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:58.872 22:21:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:58.872 22:21:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:58.872 22:21:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.872 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:58.872 22:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.783 22:21:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:00.783 00:24:00.783 real 0m10.925s 00:24:00.783 user 0m3.841s 00:24:00.783 sys 0m5.500s 00:24:00.783 22:21:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:00.783 22:21:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:00.783 ************************************ 00:24:00.783 END TEST nvmf_async_init 00:24:00.783 ************************************ 00:24:00.783 22:21:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:00.783 22:21:26 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:00.783 22:21:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:00.783 22:21:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:00.783 22:21:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:00.783 ************************************ 00:24:00.783 START TEST dma 00:24:00.783 ************************************ 00:24:00.783 22:21:26 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:01.044 * Looking for test storage... 
00:24:01.044 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:01.044 22:21:26 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:01.044 22:21:26 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:24:01.044 22:21:26 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:01.044 22:21:26 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:01.044 22:21:26 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:01.044 22:21:26 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:01.044 22:21:26 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:01.044 22:21:26 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:01.044 22:21:26 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:01.044 22:21:26 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:01.044 22:21:26 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:01.044 22:21:26 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:01.044 22:21:26 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:01.044 22:21:26 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:01.044 22:21:26 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:01.044 22:21:26 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:01.044 22:21:26 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:01.044 22:21:26 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:01.044 22:21:26 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:01.044 22:21:26 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:01.044 22:21:26 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:01.044 22:21:26 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:01.044 22:21:26 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.044 22:21:26 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.044 22:21:26 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.044 22:21:26 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:24:01.044 22:21:26 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.044 22:21:26 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:24:01.044 22:21:26 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:01.044 22:21:26 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:01.044 22:21:26 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:01.044 22:21:26 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:01.044 22:21:26 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:01.044 22:21:26 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:01.044 22:21:26 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:01.044 22:21:26 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:01.044 22:21:26 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:01.044 22:21:26 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:24:01.044 00:24:01.044 real 0m0.134s 00:24:01.044 user 0m0.067s 00:24:01.044 sys 0m0.076s 00:24:01.044 22:21:26 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:01.044 22:21:26 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:24:01.044 ************************************ 00:24:01.044 END TEST dma 00:24:01.044 ************************************ 00:24:01.044 22:21:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:01.044 22:21:26 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:01.044 22:21:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:01.044 22:21:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:01.044 22:21:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:01.044 ************************************ 00:24:01.044 START TEST nvmf_identify 00:24:01.044 ************************************ 00:24:01.044 22:21:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:01.304 * Looking for test storage... 
00:24:01.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:01.304 22:21:26 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:01.304 22:21:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:01.304 22:21:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:01.304 22:21:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:01.304 22:21:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:01.304 22:21:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:01.304 22:21:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:01.304 22:21:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:01.304 22:21:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:01.304 22:21:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:01.304 22:21:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:01.304 22:21:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:01.304 22:21:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:01.304 22:21:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:01.304 22:21:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:01.304 22:21:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:01.304 22:21:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:01.304 22:21:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:01.304 22:21:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:01.304 22:21:26 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:01.304 22:21:26 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:01.304 22:21:26 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:01.304 22:21:26 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.304 22:21:26 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.304 22:21:26 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.304 22:21:26 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:01.304 22:21:26 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.304 22:21:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:24:01.304 22:21:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:01.304 22:21:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:01.304 22:21:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:01.304 22:21:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:01.304 22:21:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:01.304 22:21:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:01.305 22:21:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:01.305 22:21:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:01.305 22:21:26 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:01.305 22:21:26 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:01.305 22:21:26 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:01.305 22:21:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:01.305 22:21:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:01.305 22:21:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:01.305 22:21:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:01.305 22:21:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:01.305 22:21:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.305 22:21:26 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:01.305 22:21:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.305 22:21:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:01.305 22:21:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:01.305 22:21:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:24:01.305 22:21:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:09.441 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:09.441 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.441 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:09.442 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:09.442 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:09.442 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:09.442 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.593 ms 00:24:09.442 00:24:09.442 --- 10.0.0.2 ping statistics --- 00:24:09.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.442 rtt min/avg/max/mdev = 0.593/0.593/0.593/0.000 ms 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:09.442 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:09.442 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:24:09.442 00:24:09.442 --- 10.0.0.1 ping statistics --- 00:24:09.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.442 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2869216 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2869216 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 2869216 ']' 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:09.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:09.442 22:21:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:09.442 [2024-07-15 22:21:33.666939] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:24:09.442 [2024-07-15 22:21:33.667013] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:09.442 EAL: No free 2048 kB hugepages reported on node 1 00:24:09.442 [2024-07-15 22:21:33.738066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:09.442 [2024-07-15 22:21:33.814088] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
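The nvmf_tcp_init sequence traced above (nvmf/common.sh@229 through @268, plus the modprobe at @474) reduces to a short iproute2/iptables recipe: park the target-facing port in its own network namespace, address both sides, open TCP port 4420, verify reachability, and load the kernel NVMe/TCP initiator. A minimal sketch reconstructed from that trace follows; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are the values detected on this host, not fixed constants.

# Target port lives in its own netns; the initiator port stays in the root namespace.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # do not filter NVMe/TCP
ping -c 1 10.0.0.2                                                   # root ns -> target side
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target ns -> initiator side
modprobe nvme-tcp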
00:24:09.442 [2024-07-15 22:21:33.814130] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:09.442 [2024-07-15 22:21:33.814138] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:09.442 [2024-07-15 22:21:33.814144] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:09.442 [2024-07-15 22:21:33.814150] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:09.442 [2024-07-15 22:21:33.814323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:09.442 [2024-07-15 22:21:33.814440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:09.442 [2024-07-15 22:21:33.814596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:09.442 [2024-07-15 22:21:33.814597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:09.442 22:21:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:09.442 22:21:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:24:09.442 22:21:34 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:09.442 22:21:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.442 22:21:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:09.442 [2024-07-15 22:21:34.439605] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:09.442 22:21:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.442 22:21:34 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:09.442 22:21:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:09.442 22:21:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:09.442 22:21:34 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:09.442 22:21:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.442 22:21:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:09.442 Malloc0 00:24:09.442 22:21:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.442 22:21:34 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:09.442 22:21:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.442 22:21:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:09.442 22:21:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.442 22:21:34 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:09.442 22:21:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.442 22:21:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:09.442 22:21:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.442 22:21:34 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:09.442 22:21:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:24:09.442 22:21:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:09.442 [2024-07-15 22:21:34.539209] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:09.442 22:21:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.442 22:21:34 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:09.442 22:21:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.442 22:21:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:09.442 22:21:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.442 22:21:34 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:09.442 22:21:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.442 22:21:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:09.442 [ 00:24:09.442 { 00:24:09.442 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:09.442 "subtype": "Discovery", 00:24:09.442 "listen_addresses": [ 00:24:09.442 { 00:24:09.442 "trtype": "TCP", 00:24:09.442 "adrfam": "IPv4", 00:24:09.443 "traddr": "10.0.0.2", 00:24:09.443 "trsvcid": "4420" 00:24:09.443 } 00:24:09.443 ], 00:24:09.443 "allow_any_host": true, 00:24:09.443 "hosts": [] 00:24:09.443 }, 00:24:09.443 { 00:24:09.443 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:09.443 "subtype": "NVMe", 00:24:09.443 "listen_addresses": [ 00:24:09.443 { 00:24:09.443 "trtype": "TCP", 00:24:09.443 "adrfam": "IPv4", 00:24:09.443 "traddr": "10.0.0.2", 00:24:09.443 "trsvcid": "4420" 00:24:09.443 } 00:24:09.443 ], 00:24:09.443 "allow_any_host": true, 00:24:09.443 "hosts": [], 00:24:09.443 "serial_number": "SPDK00000000000001", 00:24:09.443 "model_number": "SPDK bdev Controller", 00:24:09.443 "max_namespaces": 32, 00:24:09.443 "min_cntlid": 1, 00:24:09.443 "max_cntlid": 65519, 00:24:09.443 "namespaces": [ 00:24:09.443 { 00:24:09.443 "nsid": 1, 00:24:09.443 "bdev_name": "Malloc0", 00:24:09.443 "name": "Malloc0", 00:24:09.443 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:09.443 "eui64": "ABCDEF0123456789", 00:24:09.443 "uuid": "5e96435b-4026-4d97-a4dd-5892204428c8" 00:24:09.443 } 00:24:09.443 ] 00:24:09.443 } 00:24:09.443 ] 00:24:09.443 22:21:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.443 22:21:34 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:09.443 [2024-07-15 22:21:34.600879] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
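Everything the identify test needs from the target is set up by the handful of rpc_cmd calls in host/identify.sh@24 through @35 above. Assuming rpc_cmd resolves to scripts/rpc.py talking to the target over /var/tmp/spdk.sock, as it does in the autotest harness, the equivalent manual sequence is roughly the sketch below; paths are relative to the SPDK repo and all values are the ones used in this run.

# Launch the target inside the namespace created earlier, then configure it over RPC.
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
# Wait for /var/tmp/spdk.sock to appear (the harness does this via waitforlisten) before issuing RPCs.

rpc=scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                  # TCP transport
$rpc bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
     --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_get_subsystems                                      # should list discovery + cnode1, as above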
00:24:09.443 [2024-07-15 22:21:34.600921] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2869466 ] 00:24:09.443 EAL: No free 2048 kB hugepages reported on node 1 00:24:09.443 [2024-07-15 22:21:34.631729] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:24:09.443 [2024-07-15 22:21:34.631773] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:09.443 [2024-07-15 22:21:34.631778] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:09.443 [2024-07-15 22:21:34.631790] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:09.443 [2024-07-15 22:21:34.631796] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:09.443 [2024-07-15 22:21:34.635162] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:24:09.443 [2024-07-15 22:21:34.635193] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1dd3ec0 0 00:24:09.443 [2024-07-15 22:21:34.643135] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:09.443 [2024-07-15 22:21:34.643147] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:09.443 [2024-07-15 22:21:34.643152] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:09.443 [2024-07-15 22:21:34.643155] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:09.443 [2024-07-15 22:21:34.643189] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.443 [2024-07-15 22:21:34.643195] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.443 [2024-07-15 22:21:34.643199] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dd3ec0) 00:24:09.443 [2024-07-15 22:21:34.643211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:09.443 [2024-07-15 22:21:34.643228] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e56e40, cid 0, qid 0 00:24:09.443 [2024-07-15 22:21:34.651133] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.443 [2024-07-15 22:21:34.651146] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.443 [2024-07-15 22:21:34.651150] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.443 [2024-07-15 22:21:34.651154] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e56e40) on tqpair=0x1dd3ec0 00:24:09.443 [2024-07-15 22:21:34.651166] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:09.443 [2024-07-15 22:21:34.651173] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:24:09.443 [2024-07-15 22:21:34.651178] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:24:09.443 [2024-07-15 22:21:34.651190] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.443 [2024-07-15 22:21:34.651195] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.443 [2024-07-15 22:21:34.651198] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dd3ec0) 00:24:09.443 [2024-07-15 22:21:34.651205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.443 [2024-07-15 22:21:34.651218] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e56e40, cid 0, qid 0 00:24:09.443 [2024-07-15 22:21:34.651364] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.443 [2024-07-15 22:21:34.651370] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.443 [2024-07-15 22:21:34.651374] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.443 [2024-07-15 22:21:34.651378] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e56e40) on tqpair=0x1dd3ec0 00:24:09.443 [2024-07-15 22:21:34.651383] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:24:09.443 [2024-07-15 22:21:34.651390] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:24:09.443 [2024-07-15 22:21:34.651396] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.443 [2024-07-15 22:21:34.651400] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.443 [2024-07-15 22:21:34.651403] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dd3ec0) 00:24:09.443 [2024-07-15 22:21:34.651410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.443 [2024-07-15 22:21:34.651421] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e56e40, cid 0, qid 0 00:24:09.443 [2024-07-15 22:21:34.651558] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.443 [2024-07-15 22:21:34.651564] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.443 [2024-07-15 22:21:34.651568] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.443 [2024-07-15 22:21:34.651572] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e56e40) on tqpair=0x1dd3ec0 00:24:09.443 [2024-07-15 22:21:34.651577] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:24:09.443 [2024-07-15 22:21:34.651584] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:24:09.443 [2024-07-15 22:21:34.651590] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.443 [2024-07-15 22:21:34.651594] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.443 [2024-07-15 22:21:34.651597] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dd3ec0) 00:24:09.443 [2024-07-15 22:21:34.651604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.443 [2024-07-15 22:21:34.651614] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e56e40, cid 0, qid 0 00:24:09.443 [2024-07-15 22:21:34.651758] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.443 
[2024-07-15 22:21:34.651768] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.443 [2024-07-15 22:21:34.651771] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.443 [2024-07-15 22:21:34.651775] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e56e40) on tqpair=0x1dd3ec0 00:24:09.443 [2024-07-15 22:21:34.651780] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:09.443 [2024-07-15 22:21:34.651789] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.443 [2024-07-15 22:21:34.651793] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.443 [2024-07-15 22:21:34.651796] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dd3ec0) 00:24:09.443 [2024-07-15 22:21:34.651803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.443 [2024-07-15 22:21:34.651813] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e56e40, cid 0, qid 0 00:24:09.443 [2024-07-15 22:21:34.651910] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.443 [2024-07-15 22:21:34.651916] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.443 [2024-07-15 22:21:34.651919] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.443 [2024-07-15 22:21:34.651923] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e56e40) on tqpair=0x1dd3ec0 00:24:09.443 [2024-07-15 22:21:34.651928] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:24:09.443 [2024-07-15 22:21:34.651932] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:24:09.443 [2024-07-15 22:21:34.651940] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:09.443 [2024-07-15 22:21:34.652045] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:24:09.443 [2024-07-15 22:21:34.652049] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:09.443 [2024-07-15 22:21:34.652057] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.443 [2024-07-15 22:21:34.652061] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.443 [2024-07-15 22:21:34.652064] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dd3ec0) 00:24:09.443 [2024-07-15 22:21:34.652071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.443 [2024-07-15 22:21:34.652081] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e56e40, cid 0, qid 0 00:24:09.443 [2024-07-15 22:21:34.652229] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.443 [2024-07-15 22:21:34.652236] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.443 [2024-07-15 22:21:34.652240] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:24:09.443 [2024-07-15 22:21:34.652243] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e56e40) on tqpair=0x1dd3ec0 00:24:09.443 [2024-07-15 22:21:34.652248] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:09.443 [2024-07-15 22:21:34.652257] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.443 [2024-07-15 22:21:34.652261] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.443 [2024-07-15 22:21:34.652264] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dd3ec0) 00:24:09.444 [2024-07-15 22:21:34.652271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.444 [2024-07-15 22:21:34.652281] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e56e40, cid 0, qid 0 00:24:09.444 [2024-07-15 22:21:34.652379] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.444 [2024-07-15 22:21:34.652385] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.444 [2024-07-15 22:21:34.652388] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.444 [2024-07-15 22:21:34.652392] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e56e40) on tqpair=0x1dd3ec0 00:24:09.444 [2024-07-15 22:21:34.652396] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:09.444 [2024-07-15 22:21:34.652401] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:24:09.444 [2024-07-15 22:21:34.652409] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:24:09.444 [2024-07-15 22:21:34.652416] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:24:09.444 [2024-07-15 22:21:34.652425] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.444 [2024-07-15 22:21:34.652429] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dd3ec0) 00:24:09.444 [2024-07-15 22:21:34.652436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.444 [2024-07-15 22:21:34.652446] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e56e40, cid 0, qid 0 00:24:09.444 [2024-07-15 22:21:34.652570] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:09.444 [2024-07-15 22:21:34.652577] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:09.444 [2024-07-15 22:21:34.652580] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:09.444 [2024-07-15 22:21:34.652584] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1dd3ec0): datao=0, datal=4096, cccid=0 00:24:09.444 [2024-07-15 22:21:34.652589] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e56e40) on tqpair(0x1dd3ec0): expected_datao=0, payload_size=4096 00:24:09.444 [2024-07-15 22:21:34.652593] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:24:09.444 [2024-07-15 22:21:34.652704] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:09.444 [2024-07-15 22:21:34.652709] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:09.444 [2024-07-15 22:21:34.693338] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.444 [2024-07-15 22:21:34.693348] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.444 [2024-07-15 22:21:34.693351] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.444 [2024-07-15 22:21:34.693355] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e56e40) on tqpair=0x1dd3ec0 00:24:09.444 [2024-07-15 22:21:34.693363] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:24:09.444 [2024-07-15 22:21:34.693371] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:24:09.444 [2024-07-15 22:21:34.693375] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:24:09.444 [2024-07-15 22:21:34.693381] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:24:09.444 [2024-07-15 22:21:34.693385] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:24:09.444 [2024-07-15 22:21:34.693390] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:24:09.444 [2024-07-15 22:21:34.693398] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:24:09.444 [2024-07-15 22:21:34.693405] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.444 [2024-07-15 22:21:34.693412] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.444 [2024-07-15 22:21:34.693415] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dd3ec0) 00:24:09.444 [2024-07-15 22:21:34.693423] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:09.444 [2024-07-15 22:21:34.693435] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e56e40, cid 0, qid 0 00:24:09.444 [2024-07-15 22:21:34.693543] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.444 [2024-07-15 22:21:34.693549] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.444 [2024-07-15 22:21:34.693553] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.444 [2024-07-15 22:21:34.693556] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e56e40) on tqpair=0x1dd3ec0 00:24:09.444 [2024-07-15 22:21:34.693564] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.444 [2024-07-15 22:21:34.693567] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.444 [2024-07-15 22:21:34.693571] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dd3ec0) 00:24:09.444 [2024-07-15 22:21:34.693577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.444 [2024-07-15 22:21:34.693583] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.444 [2024-07-15 22:21:34.693587] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.444 [2024-07-15 22:21:34.693590] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1dd3ec0) 00:24:09.444 [2024-07-15 22:21:34.693596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.444 [2024-07-15 22:21:34.693602] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.444 [2024-07-15 22:21:34.693606] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.444 [2024-07-15 22:21:34.693609] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1dd3ec0) 00:24:09.444 [2024-07-15 22:21:34.693615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.444 [2024-07-15 22:21:34.693621] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.444 [2024-07-15 22:21:34.693625] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.444 [2024-07-15 22:21:34.693628] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.444 [2024-07-15 22:21:34.693634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.444 [2024-07-15 22:21:34.693638] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:24:09.444 [2024-07-15 22:21:34.693649] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:09.444 [2024-07-15 22:21:34.693656] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.444 [2024-07-15 22:21:34.693659] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1dd3ec0) 00:24:09.444 [2024-07-15 22:21:34.693666] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.444 [2024-07-15 22:21:34.693678] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e56e40, cid 0, qid 0 00:24:09.444 [2024-07-15 22:21:34.693683] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e56fc0, cid 1, qid 0 00:24:09.444 [2024-07-15 22:21:34.693688] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e57140, cid 2, qid 0 00:24:09.444 [2024-07-15 22:21:34.693693] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.444 [2024-07-15 22:21:34.693700] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e57440, cid 4, qid 0 00:24:09.444 [2024-07-15 22:21:34.693851] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.444 [2024-07-15 22:21:34.693857] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.444 [2024-07-15 22:21:34.693861] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.444 [2024-07-15 22:21:34.693865] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e57440) on tqpair=0x1dd3ec0 00:24:09.444 [2024-07-15 22:21:34.693870] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:24:09.444 [2024-07-15 22:21:34.693875] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:24:09.444 [2024-07-15 22:21:34.693886] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.444 [2024-07-15 22:21:34.693889] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1dd3ec0) 00:24:09.444 [2024-07-15 22:21:34.693896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.444 [2024-07-15 22:21:34.693906] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e57440, cid 4, qid 0 00:24:09.444 [2024-07-15 22:21:34.694053] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:09.444 [2024-07-15 22:21:34.694059] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:09.444 [2024-07-15 22:21:34.694062] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:09.444 [2024-07-15 22:21:34.694066] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1dd3ec0): datao=0, datal=4096, cccid=4 00:24:09.444 [2024-07-15 22:21:34.694071] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e57440) on tqpair(0x1dd3ec0): expected_datao=0, payload_size=4096 00:24:09.444 [2024-07-15 22:21:34.694075] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.444 [2024-07-15 22:21:34.694082] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:09.444 [2024-07-15 22:21:34.694085] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:09.444 [2024-07-15 22:21:34.694146] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.444 [2024-07-15 22:21:34.694153] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.444 [2024-07-15 22:21:34.694156] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.444 [2024-07-15 22:21:34.694160] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e57440) on tqpair=0x1dd3ec0 00:24:09.444 [2024-07-15 22:21:34.694171] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:24:09.444 [2024-07-15 22:21:34.694192] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.444 [2024-07-15 22:21:34.694197] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1dd3ec0) 00:24:09.444 [2024-07-15 22:21:34.694203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.444 [2024-07-15 22:21:34.694210] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.444 [2024-07-15 22:21:34.694214] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.444 [2024-07-15 22:21:34.694217] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1dd3ec0) 00:24:09.444 [2024-07-15 22:21:34.694223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.444 [2024-07-15 22:21:34.694237] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1e57440, cid 4, qid 0 00:24:09.444 [2024-07-15 22:21:34.694242] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e575c0, cid 5, qid 0 00:24:09.444 [2024-07-15 22:21:34.694405] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:09.444 [2024-07-15 22:21:34.694412] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:09.444 [2024-07-15 22:21:34.694418] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:09.444 [2024-07-15 22:21:34.694421] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1dd3ec0): datao=0, datal=1024, cccid=4 00:24:09.445 [2024-07-15 22:21:34.694426] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e57440) on tqpair(0x1dd3ec0): expected_datao=0, payload_size=1024 00:24:09.445 [2024-07-15 22:21:34.694430] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.445 [2024-07-15 22:21:34.694436] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:09.445 [2024-07-15 22:21:34.694440] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:09.445 [2024-07-15 22:21:34.694445] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.445 [2024-07-15 22:21:34.694451] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.445 [2024-07-15 22:21:34.694454] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.445 [2024-07-15 22:21:34.694458] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e575c0) on tqpair=0x1dd3ec0 00:24:09.445 [2024-07-15 22:21:34.739130] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.445 [2024-07-15 22:21:34.739140] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.445 [2024-07-15 22:21:34.739144] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.445 [2024-07-15 22:21:34.739147] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e57440) on tqpair=0x1dd3ec0 00:24:09.445 [2024-07-15 22:21:34.739165] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.445 [2024-07-15 22:21:34.739169] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1dd3ec0) 00:24:09.445 [2024-07-15 22:21:34.739176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.445 [2024-07-15 22:21:34.739192] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e57440, cid 4, qid 0 00:24:09.445 [2024-07-15 22:21:34.739338] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:09.445 [2024-07-15 22:21:34.739345] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:09.445 [2024-07-15 22:21:34.739349] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:09.445 [2024-07-15 22:21:34.739352] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1dd3ec0): datao=0, datal=3072, cccid=4 00:24:09.445 [2024-07-15 22:21:34.739357] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e57440) on tqpair(0x1dd3ec0): expected_datao=0, payload_size=3072 00:24:09.445 [2024-07-15 22:21:34.739361] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.445 [2024-07-15 22:21:34.739368] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:09.445 [2024-07-15 22:21:34.739372] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:09.708 [2024-07-15 22:21:34.782132] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.708 [2024-07-15 22:21:34.782141] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.708 [2024-07-15 22:21:34.782145] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.708 [2024-07-15 22:21:34.782149] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e57440) on tqpair=0x1dd3ec0 00:24:09.708 [2024-07-15 22:21:34.782159] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.708 [2024-07-15 22:21:34.782162] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1dd3ec0) 00:24:09.709 [2024-07-15 22:21:34.782169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.709 [2024-07-15 22:21:34.782184] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e57440, cid 4, qid 0 00:24:09.709 [2024-07-15 22:21:34.782323] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:09.709 [2024-07-15 22:21:34.782330] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:09.709 [2024-07-15 22:21:34.782333] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:09.709 [2024-07-15 22:21:34.782340] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1dd3ec0): datao=0, datal=8, cccid=4 00:24:09.709 [2024-07-15 22:21:34.782344] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e57440) on tqpair(0x1dd3ec0): expected_datao=0, payload_size=8 00:24:09.709 [2024-07-15 22:21:34.782349] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.709 [2024-07-15 22:21:34.782355] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:09.709 [2024-07-15 22:21:34.782359] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:09.709 [2024-07-15 22:21:34.823224] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.709 [2024-07-15 22:21:34.823236] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.709 [2024-07-15 22:21:34.823239] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.709 [2024-07-15 22:21:34.823243] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e57440) on tqpair=0x1dd3ec0 00:24:09.709 ===================================================== 00:24:09.709 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:09.709 ===================================================== 00:24:09.709 Controller Capabilities/Features 00:24:09.709 ================================ 00:24:09.709 Vendor ID: 0000 00:24:09.709 Subsystem Vendor ID: 0000 00:24:09.709 Serial Number: .................... 00:24:09.709 Model Number: ........................................ 
00:24:09.709 Firmware Version: 24.09 00:24:09.709 Recommended Arb Burst: 0 00:24:09.709 IEEE OUI Identifier: 00 00 00 00:24:09.709 Multi-path I/O 00:24:09.709 May have multiple subsystem ports: No 00:24:09.709 May have multiple controllers: No 00:24:09.709 Associated with SR-IOV VF: No 00:24:09.709 Max Data Transfer Size: 131072 00:24:09.709 Max Number of Namespaces: 0 00:24:09.709 Max Number of I/O Queues: 1024 00:24:09.709 NVMe Specification Version (VS): 1.3 00:24:09.709 NVMe Specification Version (Identify): 1.3 00:24:09.709 Maximum Queue Entries: 128 00:24:09.709 Contiguous Queues Required: Yes 00:24:09.709 Arbitration Mechanisms Supported 00:24:09.709 Weighted Round Robin: Not Supported 00:24:09.709 Vendor Specific: Not Supported 00:24:09.709 Reset Timeout: 15000 ms 00:24:09.709 Doorbell Stride: 4 bytes 00:24:09.709 NVM Subsystem Reset: Not Supported 00:24:09.709 Command Sets Supported 00:24:09.709 NVM Command Set: Supported 00:24:09.709 Boot Partition: Not Supported 00:24:09.709 Memory Page Size Minimum: 4096 bytes 00:24:09.709 Memory Page Size Maximum: 4096 bytes 00:24:09.709 Persistent Memory Region: Not Supported 00:24:09.709 Optional Asynchronous Events Supported 00:24:09.709 Namespace Attribute Notices: Not Supported 00:24:09.709 Firmware Activation Notices: Not Supported 00:24:09.709 ANA Change Notices: Not Supported 00:24:09.709 PLE Aggregate Log Change Notices: Not Supported 00:24:09.709 LBA Status Info Alert Notices: Not Supported 00:24:09.709 EGE Aggregate Log Change Notices: Not Supported 00:24:09.709 Normal NVM Subsystem Shutdown event: Not Supported 00:24:09.709 Zone Descriptor Change Notices: Not Supported 00:24:09.709 Discovery Log Change Notices: Supported 00:24:09.709 Controller Attributes 00:24:09.709 128-bit Host Identifier: Not Supported 00:24:09.709 Non-Operational Permissive Mode: Not Supported 00:24:09.709 NVM Sets: Not Supported 00:24:09.709 Read Recovery Levels: Not Supported 00:24:09.709 Endurance Groups: Not Supported 00:24:09.709 Predictable Latency Mode: Not Supported 00:24:09.709 Traffic Based Keep ALive: Not Supported 00:24:09.709 Namespace Granularity: Not Supported 00:24:09.709 SQ Associations: Not Supported 00:24:09.709 UUID List: Not Supported 00:24:09.709 Multi-Domain Subsystem: Not Supported 00:24:09.709 Fixed Capacity Management: Not Supported 00:24:09.709 Variable Capacity Management: Not Supported 00:24:09.709 Delete Endurance Group: Not Supported 00:24:09.709 Delete NVM Set: Not Supported 00:24:09.709 Extended LBA Formats Supported: Not Supported 00:24:09.709 Flexible Data Placement Supported: Not Supported 00:24:09.709 00:24:09.709 Controller Memory Buffer Support 00:24:09.709 ================================ 00:24:09.709 Supported: No 00:24:09.709 00:24:09.709 Persistent Memory Region Support 00:24:09.709 ================================ 00:24:09.709 Supported: No 00:24:09.709 00:24:09.709 Admin Command Set Attributes 00:24:09.709 ============================ 00:24:09.709 Security Send/Receive: Not Supported 00:24:09.709 Format NVM: Not Supported 00:24:09.709 Firmware Activate/Download: Not Supported 00:24:09.709 Namespace Management: Not Supported 00:24:09.709 Device Self-Test: Not Supported 00:24:09.709 Directives: Not Supported 00:24:09.709 NVMe-MI: Not Supported 00:24:09.709 Virtualization Management: Not Supported 00:24:09.709 Doorbell Buffer Config: Not Supported 00:24:09.709 Get LBA Status Capability: Not Supported 00:24:09.709 Command & Feature Lockdown Capability: Not Supported 00:24:09.709 Abort Command Limit: 1 00:24:09.709 Async 
Event Request Limit: 4 00:24:09.709 Number of Firmware Slots: N/A 00:24:09.709 Firmware Slot 1 Read-Only: N/A 00:24:09.709 Firmware Activation Without Reset: N/A 00:24:09.709 Multiple Update Detection Support: N/A 00:24:09.709 Firmware Update Granularity: No Information Provided 00:24:09.709 Per-Namespace SMART Log: No 00:24:09.709 Asymmetric Namespace Access Log Page: Not Supported 00:24:09.709 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:09.709 Command Effects Log Page: Not Supported 00:24:09.709 Get Log Page Extended Data: Supported 00:24:09.709 Telemetry Log Pages: Not Supported 00:24:09.709 Persistent Event Log Pages: Not Supported 00:24:09.709 Supported Log Pages Log Page: May Support 00:24:09.709 Commands Supported & Effects Log Page: Not Supported 00:24:09.709 Feature Identifiers & Effects Log Page:May Support 00:24:09.709 NVMe-MI Commands & Effects Log Page: May Support 00:24:09.709 Data Area 4 for Telemetry Log: Not Supported 00:24:09.709 Error Log Page Entries Supported: 128 00:24:09.709 Keep Alive: Not Supported 00:24:09.709 00:24:09.709 NVM Command Set Attributes 00:24:09.709 ========================== 00:24:09.709 Submission Queue Entry Size 00:24:09.709 Max: 1 00:24:09.709 Min: 1 00:24:09.709 Completion Queue Entry Size 00:24:09.709 Max: 1 00:24:09.709 Min: 1 00:24:09.709 Number of Namespaces: 0 00:24:09.709 Compare Command: Not Supported 00:24:09.709 Write Uncorrectable Command: Not Supported 00:24:09.709 Dataset Management Command: Not Supported 00:24:09.709 Write Zeroes Command: Not Supported 00:24:09.709 Set Features Save Field: Not Supported 00:24:09.709 Reservations: Not Supported 00:24:09.709 Timestamp: Not Supported 00:24:09.709 Copy: Not Supported 00:24:09.709 Volatile Write Cache: Not Present 00:24:09.709 Atomic Write Unit (Normal): 1 00:24:09.709 Atomic Write Unit (PFail): 1 00:24:09.709 Atomic Compare & Write Unit: 1 00:24:09.709 Fused Compare & Write: Supported 00:24:09.709 Scatter-Gather List 00:24:09.709 SGL Command Set: Supported 00:24:09.709 SGL Keyed: Supported 00:24:09.709 SGL Bit Bucket Descriptor: Not Supported 00:24:09.709 SGL Metadata Pointer: Not Supported 00:24:09.709 Oversized SGL: Not Supported 00:24:09.709 SGL Metadata Address: Not Supported 00:24:09.709 SGL Offset: Supported 00:24:09.709 Transport SGL Data Block: Not Supported 00:24:09.709 Replay Protected Memory Block: Not Supported 00:24:09.709 00:24:09.709 Firmware Slot Information 00:24:09.709 ========================= 00:24:09.709 Active slot: 0 00:24:09.709 00:24:09.709 00:24:09.709 Error Log 00:24:09.709 ========= 00:24:09.709 00:24:09.709 Active Namespaces 00:24:09.709 ================= 00:24:09.709 Discovery Log Page 00:24:09.709 ================== 00:24:09.709 Generation Counter: 2 00:24:09.709 Number of Records: 2 00:24:09.709 Record Format: 0 00:24:09.709 00:24:09.709 Discovery Log Entry 0 00:24:09.709 ---------------------- 00:24:09.709 Transport Type: 3 (TCP) 00:24:09.709 Address Family: 1 (IPv4) 00:24:09.709 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:09.709 Entry Flags: 00:24:09.709 Duplicate Returned Information: 1 00:24:09.709 Explicit Persistent Connection Support for Discovery: 1 00:24:09.709 Transport Requirements: 00:24:09.709 Secure Channel: Not Required 00:24:09.709 Port ID: 0 (0x0000) 00:24:09.709 Controller ID: 65535 (0xffff) 00:24:09.709 Admin Max SQ Size: 128 00:24:09.709 Transport Service Identifier: 4420 00:24:09.709 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:09.709 Transport Address: 10.0.0.2 00:24:09.709 
Discovery Log Entry 1 00:24:09.709 ---------------------- 00:24:09.709 Transport Type: 3 (TCP) 00:24:09.709 Address Family: 1 (IPv4) 00:24:09.709 Subsystem Type: 2 (NVM Subsystem) 00:24:09.709 Entry Flags: 00:24:09.709 Duplicate Returned Information: 0 00:24:09.709 Explicit Persistent Connection Support for Discovery: 0 00:24:09.710 Transport Requirements: 00:24:09.710 Secure Channel: Not Required 00:24:09.710 Port ID: 0 (0x0000) 00:24:09.710 Controller ID: 65535 (0xffff) 00:24:09.710 Admin Max SQ Size: 128 00:24:09.710 Transport Service Identifier: 4420 00:24:09.710 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:09.710 Transport Address: 10.0.0.2 [2024-07-15 22:21:34.823328] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:24:09.710 [2024-07-15 22:21:34.823339] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e56e40) on tqpair=0x1dd3ec0 00:24:09.710 [2024-07-15 22:21:34.823346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.710 [2024-07-15 22:21:34.823351] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e56fc0) on tqpair=0x1dd3ec0 00:24:09.710 [2024-07-15 22:21:34.823356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.710 [2024-07-15 22:21:34.823361] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e57140) on tqpair=0x1dd3ec0 00:24:09.710 [2024-07-15 22:21:34.823365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.710 [2024-07-15 22:21:34.823370] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.710 [2024-07-15 22:21:34.823375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.710 [2024-07-15 22:21:34.823386] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.710 [2024-07-15 22:21:34.823390] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.710 [2024-07-15 22:21:34.823393] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.710 [2024-07-15 22:21:34.823401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.710 [2024-07-15 22:21:34.823415] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.710 [2024-07-15 22:21:34.823569] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.710 [2024-07-15 22:21:34.823576] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.710 [2024-07-15 22:21:34.823580] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.710 [2024-07-15 22:21:34.823583] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.710 [2024-07-15 22:21:34.823590] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.710 [2024-07-15 22:21:34.823594] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.710 [2024-07-15 22:21:34.823598] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.710 [2024-07-15 
22:21:34.823604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.710 [2024-07-15 22:21:34.823617] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.710 [2024-07-15 22:21:34.823766] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.710 [2024-07-15 22:21:34.823773] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.710 [2024-07-15 22:21:34.823776] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.710 [2024-07-15 22:21:34.823783] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.710 [2024-07-15 22:21:34.823788] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:24:09.710 [2024-07-15 22:21:34.823792] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:24:09.710 [2024-07-15 22:21:34.823802] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.710 [2024-07-15 22:21:34.823805] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.710 [2024-07-15 22:21:34.823809] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.710 [2024-07-15 22:21:34.823816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.710 [2024-07-15 22:21:34.823826] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.710 [2024-07-15 22:21:34.823968] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.710 [2024-07-15 22:21:34.823974] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.710 [2024-07-15 22:21:34.823977] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.710 [2024-07-15 22:21:34.823981] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.710 [2024-07-15 22:21:34.823991] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.710 [2024-07-15 22:21:34.823995] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.710 [2024-07-15 22:21:34.823998] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.710 [2024-07-15 22:21:34.824005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.710 [2024-07-15 22:21:34.824015] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.710 [2024-07-15 22:21:34.824108] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.710 [2024-07-15 22:21:34.824114] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.710 [2024-07-15 22:21:34.824117] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.710 [2024-07-15 22:21:34.824130] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.710 [2024-07-15 22:21:34.824140] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.710 [2024-07-15 22:21:34.824144] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.710 [2024-07-15 22:21:34.824147] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.710 [2024-07-15 22:21:34.824154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.710 [2024-07-15 22:21:34.824164] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.710 [2024-07-15 22:21:34.824271] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.710 [2024-07-15 22:21:34.824278] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.710 [2024-07-15 22:21:34.824281] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.710 [2024-07-15 22:21:34.824285] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.710 [2024-07-15 22:21:34.824295] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.710 [2024-07-15 22:21:34.824299] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.710 [2024-07-15 22:21:34.824302] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.710 [2024-07-15 22:21:34.824309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.710 [2024-07-15 22:21:34.824319] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.710 [2024-07-15 22:21:34.824421] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.710 [2024-07-15 22:21:34.824427] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.710 [2024-07-15 22:21:34.824431] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.710 [2024-07-15 22:21:34.824434] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.710 [2024-07-15 22:21:34.824444] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.710 [2024-07-15 22:21:34.824448] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.710 [2024-07-15 22:21:34.824451] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.710 [2024-07-15 22:21:34.824458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.710 [2024-07-15 22:21:34.824468] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.710 [2024-07-15 22:21:34.824573] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.710 [2024-07-15 22:21:34.824579] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.710 [2024-07-15 22:21:34.824582] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.710 [2024-07-15 22:21:34.824586] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.710 [2024-07-15 22:21:34.824595] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.710 [2024-07-15 22:21:34.824599] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.710 [2024-07-15 22:21:34.824603] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.710 [2024-07-15 22:21:34.824609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.710 [2024-07-15 22:21:34.824619] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.710 [2024-07-15 22:21:34.824709] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.710 [2024-07-15 22:21:34.824715] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.710 [2024-07-15 22:21:34.824719] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.710 [2024-07-15 22:21:34.824723] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.710 [2024-07-15 22:21:34.824732] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.710 [2024-07-15 22:21:34.824736] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.710 [2024-07-15 22:21:34.824739] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.710 [2024-07-15 22:21:34.824746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.710 [2024-07-15 22:21:34.824755] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.710 [2024-07-15 22:21:34.824876] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.710 [2024-07-15 22:21:34.824882] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.710 [2024-07-15 22:21:34.824885] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.710 [2024-07-15 22:21:34.824889] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.710 [2024-07-15 22:21:34.824899] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.710 [2024-07-15 22:21:34.824903] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.710 [2024-07-15 22:21:34.824906] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.710 [2024-07-15 22:21:34.824913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.710 [2024-07-15 22:21:34.824922] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.710 [2024-07-15 22:21:34.825129] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.710 [2024-07-15 22:21:34.825135] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.710 [2024-07-15 22:21:34.825141] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.710 [2024-07-15 22:21:34.825145] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.710 [2024-07-15 22:21:34.825154] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.710 [2024-07-15 22:21:34.825158] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.710 [2024-07-15 22:21:34.825162] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.710 [2024-07-15 22:21:34.825168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.710 [2024-07-15 22:21:34.825178] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.711 
[2024-07-15 22:21:34.825318] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.711 [2024-07-15 22:21:34.825324] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.711 [2024-07-15 22:21:34.825328] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.711 [2024-07-15 22:21:34.825332] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.711 [2024-07-15 22:21:34.825341] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.711 [2024-07-15 22:21:34.825344] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.711 [2024-07-15 22:21:34.825348] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.711 [2024-07-15 22:21:34.825354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.711 [2024-07-15 22:21:34.825364] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.711 [2024-07-15 22:21:34.825466] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.711 [2024-07-15 22:21:34.825473] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.711 [2024-07-15 22:21:34.825476] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.711 [2024-07-15 22:21:34.825480] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.711 [2024-07-15 22:21:34.825489] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.711 [2024-07-15 22:21:34.825493] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.711 [2024-07-15 22:21:34.825496] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.711 [2024-07-15 22:21:34.825503] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.711 [2024-07-15 22:21:34.825513] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.711 [2024-07-15 22:21:34.825618] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.711 [2024-07-15 22:21:34.825625] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.711 [2024-07-15 22:21:34.825628] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.711 [2024-07-15 22:21:34.825632] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.711 [2024-07-15 22:21:34.825641] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.711 [2024-07-15 22:21:34.825645] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.711 [2024-07-15 22:21:34.825649] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.711 [2024-07-15 22:21:34.825655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.711 [2024-07-15 22:21:34.825665] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.711 [2024-07-15 22:21:34.825770] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.711 [2024-07-15 22:21:34.825776] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:24:09.711 [2024-07-15 22:21:34.825780] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.711 [2024-07-15 22:21:34.825787] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.711 [2024-07-15 22:21:34.825797] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.711 [2024-07-15 22:21:34.825801] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.711 [2024-07-15 22:21:34.825804] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.711 [2024-07-15 22:21:34.825811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.711 [2024-07-15 22:21:34.825821] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.711 [2024-07-15 22:21:34.825920] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.711 [2024-07-15 22:21:34.825927] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.711 [2024-07-15 22:21:34.825930] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.711 [2024-07-15 22:21:34.825934] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.711 [2024-07-15 22:21:34.825943] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.711 [2024-07-15 22:21:34.825947] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.711 [2024-07-15 22:21:34.825950] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.711 [2024-07-15 22:21:34.825957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.711 [2024-07-15 22:21:34.825967] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.711 [2024-07-15 22:21:34.826060] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.711 [2024-07-15 22:21:34.826066] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.711 [2024-07-15 22:21:34.826070] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.711 [2024-07-15 22:21:34.826074] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.711 [2024-07-15 22:21:34.826083] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.711 [2024-07-15 22:21:34.826087] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.711 [2024-07-15 22:21:34.826090] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.711 [2024-07-15 22:21:34.826097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.711 [2024-07-15 22:21:34.826107] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.711 [2024-07-15 22:21:34.826224] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.711 [2024-07-15 22:21:34.826231] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.711 [2024-07-15 22:21:34.826234] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.711 [2024-07-15 22:21:34.826238] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.711 [2024-07-15 22:21:34.826247] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.711 [2024-07-15 22:21:34.826251] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.711 [2024-07-15 22:21:34.826254] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.711 [2024-07-15 22:21:34.826261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.711 [2024-07-15 22:21:34.826271] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.711 [2024-07-15 22:21:34.826376] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.711 [2024-07-15 22:21:34.826383] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.711 [2024-07-15 22:21:34.826386] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.711 [2024-07-15 22:21:34.826390] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.711 [2024-07-15 22:21:34.826402] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.711 [2024-07-15 22:21:34.826406] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.711 [2024-07-15 22:21:34.826409] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.711 [2024-07-15 22:21:34.826416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.711 [2024-07-15 22:21:34.826426] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.711 [2024-07-15 22:21:34.826525] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.711 [2024-07-15 22:21:34.826531] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.711 [2024-07-15 22:21:34.826535] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.711 [2024-07-15 22:21:34.826538] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.711 [2024-07-15 22:21:34.826548] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.711 [2024-07-15 22:21:34.826551] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.711 [2024-07-15 22:21:34.826555] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.711 [2024-07-15 22:21:34.826561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.711 [2024-07-15 22:21:34.826571] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.711 [2024-07-15 22:21:34.826673] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.711 [2024-07-15 22:21:34.826680] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.711 [2024-07-15 22:21:34.826683] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.711 [2024-07-15 22:21:34.826687] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.711 [2024-07-15 22:21:34.826696] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.711 [2024-07-15 22:21:34.826700] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.711 [2024-07-15 22:21:34.826703] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.711 [2024-07-15 22:21:34.826710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.711 [2024-07-15 22:21:34.826720] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.711 [2024-07-15 22:21:34.826828] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.711 [2024-07-15 22:21:34.826834] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.711 [2024-07-15 22:21:34.826837] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.711 [2024-07-15 22:21:34.826841] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.711 [2024-07-15 22:21:34.826850] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.711 [2024-07-15 22:21:34.826854] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.711 [2024-07-15 22:21:34.826857] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.711 [2024-07-15 22:21:34.826864] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.711 [2024-07-15 22:21:34.826874] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.711 [2024-07-15 22:21:34.826979] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.711 [2024-07-15 22:21:34.826985] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.711 [2024-07-15 22:21:34.826988] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.711 [2024-07-15 22:21:34.826992] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.711 [2024-07-15 22:21:34.827002] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.711 [2024-07-15 22:21:34.827008] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.711 [2024-07-15 22:21:34.827011] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.711 [2024-07-15 22:21:34.827018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.711 [2024-07-15 22:21:34.827028] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.711 [2024-07-15 22:21:34.827137] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.711 [2024-07-15 22:21:34.827144] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.711 [2024-07-15 22:21:34.827147] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.711 [2024-07-15 22:21:34.827151] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.711 [2024-07-15 22:21:34.827161] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.711 [2024-07-15 22:21:34.827165] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.712 [2024-07-15 22:21:34.827168] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.712 
[2024-07-15 22:21:34.827175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.712 [2024-07-15 22:21:34.827185] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.712 [2024-07-15 22:21:34.827281] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.712 [2024-07-15 22:21:34.827288] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.712 [2024-07-15 22:21:34.827291] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.712 [2024-07-15 22:21:34.827295] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.712 [2024-07-15 22:21:34.827304] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.712 [2024-07-15 22:21:34.827308] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.712 [2024-07-15 22:21:34.827312] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.712 [2024-07-15 22:21:34.827318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.712 [2024-07-15 22:21:34.827328] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.712 [2024-07-15 22:21:34.827433] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.712 [2024-07-15 22:21:34.827440] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.712 [2024-07-15 22:21:34.827443] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.712 [2024-07-15 22:21:34.827447] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.712 [2024-07-15 22:21:34.827456] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.712 [2024-07-15 22:21:34.827460] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.712 [2024-07-15 22:21:34.827463] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.712 [2024-07-15 22:21:34.827470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.712 [2024-07-15 22:21:34.827480] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.712 [2024-07-15 22:21:34.827585] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.712 [2024-07-15 22:21:34.827591] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.712 [2024-07-15 22:21:34.827595] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.712 [2024-07-15 22:21:34.827598] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.712 [2024-07-15 22:21:34.827608] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.712 [2024-07-15 22:21:34.827612] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.712 [2024-07-15 22:21:34.827617] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.712 [2024-07-15 22:21:34.827624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.712 [2024-07-15 22:21:34.827634] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.712 [2024-07-15 22:21:34.827736] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.712 [2024-07-15 22:21:34.827742] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.712 [2024-07-15 22:21:34.827746] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.712 [2024-07-15 22:21:34.827749] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.712 [2024-07-15 22:21:34.827759] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.712 [2024-07-15 22:21:34.827762] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.712 [2024-07-15 22:21:34.827766] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.712 [2024-07-15 22:21:34.827772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.712 [2024-07-15 22:21:34.827783] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.712 [2024-07-15 22:21:34.827878] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.712 [2024-07-15 22:21:34.827885] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.712 [2024-07-15 22:21:34.827888] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.712 [2024-07-15 22:21:34.827892] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.712 [2024-07-15 22:21:34.827901] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.712 [2024-07-15 22:21:34.827905] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.712 [2024-07-15 22:21:34.827908] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.712 [2024-07-15 22:21:34.827915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.712 [2024-07-15 22:21:34.827924] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.712 [2024-07-15 22:21:34.828039] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.712 [2024-07-15 22:21:34.828045] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.712 [2024-07-15 22:21:34.828049] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.712 [2024-07-15 22:21:34.828052] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.712 [2024-07-15 22:21:34.828062] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.712 [2024-07-15 22:21:34.828065] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.712 [2024-07-15 22:21:34.828069] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.712 [2024-07-15 22:21:34.828075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.712 [2024-07-15 22:21:34.828085] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.712 [2024-07-15 22:21:34.828189] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.712 
[2024-07-15 22:21:34.828196] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.712 [2024-07-15 22:21:34.828199] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.712 [2024-07-15 22:21:34.828203] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.712 [2024-07-15 22:21:34.828213] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.712 [2024-07-15 22:21:34.828216] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.712 [2024-07-15 22:21:34.828220] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.712 [2024-07-15 22:21:34.828228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.712 [2024-07-15 22:21:34.828239] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.712 [2024-07-15 22:21:34.828341] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.712 [2024-07-15 22:21:34.828348] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.712 [2024-07-15 22:21:34.828351] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.712 [2024-07-15 22:21:34.828355] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.712 [2024-07-15 22:21:34.828364] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.712 [2024-07-15 22:21:34.828368] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.712 [2024-07-15 22:21:34.828371] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.712 [2024-07-15 22:21:34.828378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.712 [2024-07-15 22:21:34.828388] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.712 [2024-07-15 22:21:34.828478] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.712 [2024-07-15 22:21:34.828484] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.712 [2024-07-15 22:21:34.828487] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.712 [2024-07-15 22:21:34.828491] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.712 [2024-07-15 22:21:34.828500] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.712 [2024-07-15 22:21:34.828504] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.712 [2024-07-15 22:21:34.828508] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.712 [2024-07-15 22:21:34.828514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.712 [2024-07-15 22:21:34.828524] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.712 [2024-07-15 22:21:34.828644] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.712 [2024-07-15 22:21:34.828650] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.712 [2024-07-15 22:21:34.828654] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:24:09.712 [2024-07-15 22:21:34.828657] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.712 [2024-07-15 22:21:34.828667] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.712 [2024-07-15 22:21:34.828671] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.712 [2024-07-15 22:21:34.828674] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.712 [2024-07-15 22:21:34.828681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.712 [2024-07-15 22:21:34.828690] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.712 [2024-07-15 22:21:34.828795] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.712 [2024-07-15 22:21:34.828801] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.712 [2024-07-15 22:21:34.828805] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.712 [2024-07-15 22:21:34.828808] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.712 [2024-07-15 22:21:34.828818] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.712 [2024-07-15 22:21:34.828822] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.712 [2024-07-15 22:21:34.828825] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.712 [2024-07-15 22:21:34.828832] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.712 [2024-07-15 22:21:34.828843] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.712 [2024-07-15 22:21:34.828946] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.712 [2024-07-15 22:21:34.828952] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.712 [2024-07-15 22:21:34.828955] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.712 [2024-07-15 22:21:34.828959] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.712 [2024-07-15 22:21:34.828969] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.712 [2024-07-15 22:21:34.828973] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.712 [2024-07-15 22:21:34.828976] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.712 [2024-07-15 22:21:34.828983] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.712 [2024-07-15 22:21:34.828992] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.712 [2024-07-15 22:21:34.829094] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.712 [2024-07-15 22:21:34.829101] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.712 [2024-07-15 22:21:34.829104] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.712 [2024-07-15 22:21:34.829108] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.713 [2024-07-15 22:21:34.829117] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.713 [2024-07-15 22:21:34.829121] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.713 [2024-07-15 22:21:34.829129] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.713 [2024-07-15 22:21:34.829136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.713 [2024-07-15 22:21:34.829146] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.713 [2024-07-15 22:21:34.829248] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.713 [2024-07-15 22:21:34.829255] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.713 [2024-07-15 22:21:34.829258] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.713 [2024-07-15 22:21:34.829262] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.713 [2024-07-15 22:21:34.829271] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.713 [2024-07-15 22:21:34.829275] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.713 [2024-07-15 22:21:34.829278] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.713 [2024-07-15 22:21:34.829285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.713 [2024-07-15 22:21:34.829294] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.713 [2024-07-15 22:21:34.829400] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.713 [2024-07-15 22:21:34.829406] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.713 [2024-07-15 22:21:34.829409] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.713 [2024-07-15 22:21:34.829413] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.713 [2024-07-15 22:21:34.829422] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.713 [2024-07-15 22:21:34.829426] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.713 [2024-07-15 22:21:34.829429] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.713 [2024-07-15 22:21:34.829436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.713 [2024-07-15 22:21:34.829448] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.713 [2024-07-15 22:21:34.829549] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.713 [2024-07-15 22:21:34.829555] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.713 [2024-07-15 22:21:34.829559] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.713 [2024-07-15 22:21:34.829562] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.713 [2024-07-15 22:21:34.829572] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.713 [2024-07-15 22:21:34.829576] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.713 [2024-07-15 22:21:34.829579] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.713 [2024-07-15 22:21:34.829586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.713 [2024-07-15 22:21:34.829596] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.713 [2024-07-15 22:21:34.829695] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.713 [2024-07-15 22:21:34.829701] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.713 [2024-07-15 22:21:34.829704] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.713 [2024-07-15 22:21:34.829708] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.713 [2024-07-15 22:21:34.829717] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.713 [2024-07-15 22:21:34.829721] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.713 [2024-07-15 22:21:34.829725] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.713 [2024-07-15 22:21:34.829731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.713 [2024-07-15 22:21:34.829741] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.713 [2024-07-15 22:21:34.829853] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.713 [2024-07-15 22:21:34.829859] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.713 [2024-07-15 22:21:34.829863] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.713 [2024-07-15 22:21:34.829866] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.713 [2024-07-15 22:21:34.829876] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.713 [2024-07-15 22:21:34.829879] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.713 [2024-07-15 22:21:34.829883] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.713 [2024-07-15 22:21:34.829889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.713 [2024-07-15 22:21:34.829899] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.713 [2024-07-15 22:21:34.830004] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.713 [2024-07-15 22:21:34.830010] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.713 [2024-07-15 22:21:34.830014] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.713 [2024-07-15 22:21:34.830017] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.713 [2024-07-15 22:21:34.830027] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.713 [2024-07-15 22:21:34.830030] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.713 [2024-07-15 22:21:34.830034] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.713 [2024-07-15 22:21:34.830040] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.713 [2024-07-15 22:21:34.830050] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.713 [2024-07-15 22:21:34.830158] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.713 [2024-07-15 22:21:34.830165] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.713 [2024-07-15 22:21:34.830168] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.713 [2024-07-15 22:21:34.830172] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.713 [2024-07-15 22:21:34.830181] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.713 [2024-07-15 22:21:34.830185] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.713 [2024-07-15 22:21:34.830189] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.713 [2024-07-15 22:21:34.830195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.713 [2024-07-15 22:21:34.830206] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.713 [2024-07-15 22:21:34.830300] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.713 [2024-07-15 22:21:34.830306] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.713 [2024-07-15 22:21:34.830309] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.713 [2024-07-15 22:21:34.830313] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.713 [2024-07-15 22:21:34.830322] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.713 [2024-07-15 22:21:34.830326] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.713 [2024-07-15 22:21:34.830329] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.713 [2024-07-15 22:21:34.830336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.713 [2024-07-15 22:21:34.830346] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.713 [2024-07-15 22:21:34.830457] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.713 [2024-07-15 22:21:34.830463] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.713 [2024-07-15 22:21:34.830466] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.713 [2024-07-15 22:21:34.830470] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.713 [2024-07-15 22:21:34.830479] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.713 [2024-07-15 22:21:34.830483] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.713 [2024-07-15 22:21:34.830486] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.713 [2024-07-15 22:21:34.830493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.713 [2024-07-15 22:21:34.830503] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.713 [2024-07-15 
22:21:34.830608] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.713 [2024-07-15 22:21:34.830614] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.713 [2024-07-15 22:21:34.830617] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.713 [2024-07-15 22:21:34.830621] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.714 [2024-07-15 22:21:34.830631] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.714 [2024-07-15 22:21:34.830634] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.714 [2024-07-15 22:21:34.830638] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.714 [2024-07-15 22:21:34.830645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.714 [2024-07-15 22:21:34.830654] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.714 [2024-07-15 22:21:34.830759] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.714 [2024-07-15 22:21:34.830767] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.714 [2024-07-15 22:21:34.830771] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.714 [2024-07-15 22:21:34.830774] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.714 [2024-07-15 22:21:34.830784] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.714 [2024-07-15 22:21:34.830788] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.714 [2024-07-15 22:21:34.830791] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.714 [2024-07-15 22:21:34.830798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.714 [2024-07-15 22:21:34.830807] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.714 [2024-07-15 22:21:34.830901] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.714 [2024-07-15 22:21:34.830907] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.714 [2024-07-15 22:21:34.830910] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.714 [2024-07-15 22:21:34.830914] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.714 [2024-07-15 22:21:34.830923] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.714 [2024-07-15 22:21:34.830927] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.714 [2024-07-15 22:21:34.830930] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.714 [2024-07-15 22:21:34.830937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.714 [2024-07-15 22:21:34.830947] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.714 [2024-07-15 22:21:34.831061] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.714 [2024-07-15 22:21:34.831067] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.714 
[2024-07-15 22:21:34.831071] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.714 [2024-07-15 22:21:34.831074] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.714 [2024-07-15 22:21:34.831084] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.714 [2024-07-15 22:21:34.831087] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.714 [2024-07-15 22:21:34.831091] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.714 [2024-07-15 22:21:34.831097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.714 [2024-07-15 22:21:34.831107] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.714 [2024-07-15 22:21:34.835131] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.714 [2024-07-15 22:21:34.835140] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.714 [2024-07-15 22:21:34.835143] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.714 [2024-07-15 22:21:34.835147] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.714 [2024-07-15 22:21:34.835157] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.714 [2024-07-15 22:21:34.835161] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.714 [2024-07-15 22:21:34.835164] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd3ec0) 00:24:09.714 [2024-07-15 22:21:34.835171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.714 [2024-07-15 22:21:34.835183] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e572c0, cid 3, qid 0 00:24:09.714 [2024-07-15 22:21:34.835298] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.714 [2024-07-15 22:21:34.835305] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.714 [2024-07-15 22:21:34.835311] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.714 [2024-07-15 22:21:34.835315] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e572c0) on tqpair=0x1dd3ec0 00:24:09.714 [2024-07-15 22:21:34.835322] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 11 milliseconds 00:24:09.714 00:24:09.714 22:21:34 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:09.714 [2024-07-15 22:21:34.873515] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
00:24:09.714 [2024-07-15 22:21:34.873560] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2869569 ] 00:24:09.714 EAL: No free 2048 kB hugepages reported on node 1 00:24:09.714 [2024-07-15 22:21:34.905694] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:24:09.714 [2024-07-15 22:21:34.905741] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:09.714 [2024-07-15 22:21:34.905746] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:09.714 [2024-07-15 22:21:34.905762] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:09.714 [2024-07-15 22:21:34.905768] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:09.714 [2024-07-15 22:21:34.909169] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:24:09.714 [2024-07-15 22:21:34.909200] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x9c9ec0 0 00:24:09.714 [2024-07-15 22:21:34.917134] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:09.714 [2024-07-15 22:21:34.917148] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:09.714 [2024-07-15 22:21:34.917152] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:09.714 [2024-07-15 22:21:34.917155] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:09.714 [2024-07-15 22:21:34.917189] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.714 [2024-07-15 22:21:34.917195] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.714 [2024-07-15 22:21:34.917199] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9c9ec0) 00:24:09.714 [2024-07-15 22:21:34.917210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:09.714 [2024-07-15 22:21:34.917226] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa4ce40, cid 0, qid 0 00:24:09.714 [2024-07-15 22:21:34.925137] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.714 [2024-07-15 22:21:34.925148] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.714 [2024-07-15 22:21:34.925152] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.714 [2024-07-15 22:21:34.925157] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa4ce40) on tqpair=0x9c9ec0 00:24:09.714 [2024-07-15 22:21:34.925169] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:09.714 [2024-07-15 22:21:34.925178] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:24:09.714 [2024-07-15 22:21:34.925184] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:24:09.714 [2024-07-15 22:21:34.925197] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.714 [2024-07-15 22:21:34.925204] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.714 
[2024-07-15 22:21:34.925208] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9c9ec0) 00:24:09.714 [2024-07-15 22:21:34.925216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.714 [2024-07-15 22:21:34.925234] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa4ce40, cid 0, qid 0 00:24:09.714 [2024-07-15 22:21:34.925466] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.714 [2024-07-15 22:21:34.925475] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.714 [2024-07-15 22:21:34.925479] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.714 [2024-07-15 22:21:34.925482] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa4ce40) on tqpair=0x9c9ec0 00:24:09.714 [2024-07-15 22:21:34.925487] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:24:09.714 [2024-07-15 22:21:34.925494] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:24:09.714 [2024-07-15 22:21:34.925501] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.714 [2024-07-15 22:21:34.925505] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.714 [2024-07-15 22:21:34.925508] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9c9ec0) 00:24:09.714 [2024-07-15 22:21:34.925515] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.714 [2024-07-15 22:21:34.925530] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa4ce40, cid 0, qid 0 00:24:09.714 [2024-07-15 22:21:34.925763] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.714 [2024-07-15 22:21:34.925770] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.714 [2024-07-15 22:21:34.925775] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.714 [2024-07-15 22:21:34.925781] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa4ce40) on tqpair=0x9c9ec0 00:24:09.714 [2024-07-15 22:21:34.925786] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:24:09.714 [2024-07-15 22:21:34.925794] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:24:09.714 [2024-07-15 22:21:34.925800] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.714 [2024-07-15 22:21:34.925804] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.714 [2024-07-15 22:21:34.925807] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9c9ec0) 00:24:09.714 [2024-07-15 22:21:34.925814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.714 [2024-07-15 22:21:34.925826] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa4ce40, cid 0, qid 0 00:24:09.714 [2024-07-15 22:21:34.926065] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.714 [2024-07-15 22:21:34.926071] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.714 
[2024-07-15 22:21:34.926075] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.714 [2024-07-15 22:21:34.926082] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa4ce40) on tqpair=0x9c9ec0 00:24:09.714 [2024-07-15 22:21:34.926087] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:09.714 [2024-07-15 22:21:34.926096] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.714 [2024-07-15 22:21:34.926100] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.714 [2024-07-15 22:21:34.926103] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9c9ec0) 00:24:09.715 [2024-07-15 22:21:34.926110] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.715 [2024-07-15 22:21:34.926133] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa4ce40, cid 0, qid 0 00:24:09.715 [2024-07-15 22:21:34.926334] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.715 [2024-07-15 22:21:34.926340] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.715 [2024-07-15 22:21:34.926344] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.715 [2024-07-15 22:21:34.926347] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa4ce40) on tqpair=0x9c9ec0 00:24:09.715 [2024-07-15 22:21:34.926352] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:24:09.715 [2024-07-15 22:21:34.926356] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:24:09.715 [2024-07-15 22:21:34.926363] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:09.715 [2024-07-15 22:21:34.926469] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:24:09.715 [2024-07-15 22:21:34.926473] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:09.715 [2024-07-15 22:21:34.926481] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.715 [2024-07-15 22:21:34.926485] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.715 [2024-07-15 22:21:34.926488] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9c9ec0) 00:24:09.715 [2024-07-15 22:21:34.926495] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.715 [2024-07-15 22:21:34.926505] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa4ce40, cid 0, qid 0 00:24:09.715 [2024-07-15 22:21:34.926757] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.715 [2024-07-15 22:21:34.926764] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.715 [2024-07-15 22:21:34.926769] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.715 [2024-07-15 22:21:34.926775] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa4ce40) on tqpair=0x9c9ec0 00:24:09.715 [2024-07-15 
22:21:34.926779] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:09.715 [2024-07-15 22:21:34.926788] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.715 [2024-07-15 22:21:34.926792] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.715 [2024-07-15 22:21:34.926796] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9c9ec0) 00:24:09.715 [2024-07-15 22:21:34.926802] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.715 [2024-07-15 22:21:34.926812] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa4ce40, cid 0, qid 0 00:24:09.715 [2024-07-15 22:21:34.927037] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.715 [2024-07-15 22:21:34.927044] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.715 [2024-07-15 22:21:34.927047] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.715 [2024-07-15 22:21:34.927051] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa4ce40) on tqpair=0x9c9ec0 00:24:09.715 [2024-07-15 22:21:34.927055] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:09.715 [2024-07-15 22:21:34.927059] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:24:09.715 [2024-07-15 22:21:34.927067] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:24:09.715 [2024-07-15 22:21:34.927078] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:24:09.715 [2024-07-15 22:21:34.927089] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.715 [2024-07-15 22:21:34.927093] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9c9ec0) 00:24:09.715 [2024-07-15 22:21:34.927100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.715 [2024-07-15 22:21:34.927111] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa4ce40, cid 0, qid 0 00:24:09.715 [2024-07-15 22:21:34.927340] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:09.715 [2024-07-15 22:21:34.927347] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:09.715 [2024-07-15 22:21:34.927350] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:09.715 [2024-07-15 22:21:34.927354] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9c9ec0): datao=0, datal=4096, cccid=0 00:24:09.715 [2024-07-15 22:21:34.927359] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa4ce40) on tqpair(0x9c9ec0): expected_datao=0, payload_size=4096 00:24:09.715 [2024-07-15 22:21:34.927363] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.715 [2024-07-15 22:21:34.927437] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:09.715 [2024-07-15 22:21:34.927444] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:09.715 
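At this point the driver has observed CC.EN = 1 and CSTS.RDY = 1, declared the controller ready, and issued IDENTIFY (CNS 01h) for the controller data structure; the MDTS, CNTLID, and fused-compare-and-write values logged a few entries below come from that response. Once initialization finishes, the identify data is cached and can be read back without another admin command. A minimal sketch, again assuming the connected `ctrlr` from the first example:

```c
#include "spdk/stdinc.h"
#include "spdk/nvme.h"

/*
 * Dump a few fields of the cached identify-controller data. Sketch only;
 * the field selection is illustrative, not what the identify tool prints.
 */
static void
print_ctrlr_identify(struct spdk_nvme_ctrlr *ctrlr)
{
	const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

	printf("CNTLID:       0x%04x\n", cdata->cntlid);
	printf("Model:        %.*s\n", (int)sizeof(cdata->mn), (const char *)cdata->mn);
	printf("Serial:       %.*s\n", (int)sizeof(cdata->sn), (const char *)cdata->sn);
	printf("Firmware:     %.*s\n", (int)sizeof(cdata->fr), (const char *)cdata->fr);
	printf("Max transfer: %u bytes\n", spdk_nvme_ctrlr_get_max_xfer_size(ctrlr));
}
```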
[2024-07-15 22:21:34.968380] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.715 [2024-07-15 22:21:34.968394] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.715 [2024-07-15 22:21:34.968398] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.715 [2024-07-15 22:21:34.968402] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa4ce40) on tqpair=0x9c9ec0 00:24:09.715 [2024-07-15 22:21:34.968411] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:24:09.715 [2024-07-15 22:21:34.968419] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:24:09.715 [2024-07-15 22:21:34.968424] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:24:09.715 [2024-07-15 22:21:34.968428] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:24:09.715 [2024-07-15 22:21:34.968433] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:24:09.715 [2024-07-15 22:21:34.968438] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:24:09.715 [2024-07-15 22:21:34.968447] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:24:09.715 [2024-07-15 22:21:34.968457] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.715 [2024-07-15 22:21:34.968461] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.715 [2024-07-15 22:21:34.968465] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9c9ec0) 00:24:09.715 [2024-07-15 22:21:34.968473] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:09.715 [2024-07-15 22:21:34.968487] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa4ce40, cid 0, qid 0 00:24:09.715 [2024-07-15 22:21:34.968668] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.715 [2024-07-15 22:21:34.968675] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.715 [2024-07-15 22:21:34.968679] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.715 [2024-07-15 22:21:34.968682] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa4ce40) on tqpair=0x9c9ec0 00:24:09.715 [2024-07-15 22:21:34.968689] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.715 [2024-07-15 22:21:34.968693] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.715 [2024-07-15 22:21:34.968700] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9c9ec0) 00:24:09.715 [2024-07-15 22:21:34.968709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.715 [2024-07-15 22:21:34.968717] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.715 [2024-07-15 22:21:34.968721] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.715 [2024-07-15 22:21:34.968724] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x9c9ec0) 
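The SET FEATURES ASYNC EVENT CONFIGURATION command and the four ASYNC EVENT REQUEST submissions above, together with the GET FEATURES KEEP ALIVE TIMER that follows, are the AER and keep-alive portion of initialization. From the application's side this only requires registering a callback and polling the admin queue; the keep-alive commands (opcode 18h, visible later in the trace) are then sent automatically. A sketch under the same connected-`ctrlr` assumption, with a hypothetical callback name:

```c
#include "spdk/stdinc.h"
#include "spdk/nvme.h"

/* Hypothetical AER handler, invoked when the target completes one of the
 * outstanding ASYNC EVENT REQUEST commands seen in the trace. */
static void
aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	(void)arg;
	if (!spdk_nvme_cpl_is_error(cpl)) {
		printf("AER completed, cdw0=0x%08x\n", cpl->cdw0);
	}
}

static void
poll_admin_queue(struct spdk_nvme_ctrlr *ctrlr)
{
	spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);

	/*
	 * Keep-alive and AER completions are processed here; a real application
	 * would break out of this loop on its own termination condition.
	 */
	for (;;) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
}
```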
00:24:09.715 [2024-07-15 22:21:34.968730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.715 [2024-07-15 22:21:34.968736] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.715 [2024-07-15 22:21:34.968739] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.715 [2024-07-15 22:21:34.968743] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x9c9ec0) 00:24:09.715 [2024-07-15 22:21:34.968748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.715 [2024-07-15 22:21:34.968754] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.715 [2024-07-15 22:21:34.968758] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.715 [2024-07-15 22:21:34.968761] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9c9ec0) 00:24:09.715 [2024-07-15 22:21:34.968767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.715 [2024-07-15 22:21:34.968771] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:09.715 [2024-07-15 22:21:34.968784] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:09.715 [2024-07-15 22:21:34.968791] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.715 [2024-07-15 22:21:34.968794] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9c9ec0) 00:24:09.715 [2024-07-15 22:21:34.968801] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.715 [2024-07-15 22:21:34.968814] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa4ce40, cid 0, qid 0 00:24:09.715 [2024-07-15 22:21:34.968819] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa4cfc0, cid 1, qid 0 00:24:09.715 [2024-07-15 22:21:34.968824] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa4d140, cid 2, qid 0 00:24:09.715 [2024-07-15 22:21:34.968829] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa4d2c0, cid 3, qid 0 00:24:09.715 [2024-07-15 22:21:34.968833] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa4d440, cid 4, qid 0 00:24:09.715 [2024-07-15 22:21:34.969101] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.715 [2024-07-15 22:21:34.969108] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.715 [2024-07-15 22:21:34.969111] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.715 [2024-07-15 22:21:34.969115] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa4d440) on tqpair=0x9c9ec0 00:24:09.715 [2024-07-15 22:21:34.969120] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:24:09.715 [2024-07-15 22:21:34.973132] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:09.715 [2024-07-15 22:21:34.973143] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:24:09.715 [2024-07-15 22:21:34.973152] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:09.715 [2024-07-15 22:21:34.973162] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.715 [2024-07-15 22:21:34.973166] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.715 [2024-07-15 22:21:34.973169] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9c9ec0) 00:24:09.715 [2024-07-15 22:21:34.973176] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:09.715 [2024-07-15 22:21:34.973189] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa4d440, cid 4, qid 0 00:24:09.716 [2024-07-15 22:21:34.973444] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.716 [2024-07-15 22:21:34.973451] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.716 [2024-07-15 22:21:34.973455] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.716 [2024-07-15 22:21:34.973458] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa4d440) on tqpair=0x9c9ec0 00:24:09.716 [2024-07-15 22:21:34.973524] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:24:09.716 [2024-07-15 22:21:34.973535] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:09.716 [2024-07-15 22:21:34.973545] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.716 [2024-07-15 22:21:34.973549] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9c9ec0) 00:24:09.716 [2024-07-15 22:21:34.973555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.716 [2024-07-15 22:21:34.973566] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa4d440, cid 4, qid 0 00:24:09.716 [2024-07-15 22:21:34.973801] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:09.716 [2024-07-15 22:21:34.973811] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:09.716 [2024-07-15 22:21:34.973815] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:09.716 [2024-07-15 22:21:34.973819] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9c9ec0): datao=0, datal=4096, cccid=4 00:24:09.716 [2024-07-15 22:21:34.973823] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa4d440) on tqpair(0x9c9ec0): expected_datao=0, payload_size=4096 00:24:09.716 [2024-07-15 22:21:34.973827] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.716 [2024-07-15 22:21:34.973834] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:09.716 [2024-07-15 22:21:34.973838] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:09.716 [2024-07-15 22:21:34.974005] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.716 [2024-07-15 22:21:34.974011] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:24:09.716 [2024-07-15 22:21:34.974015] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.716 [2024-07-15 22:21:34.974018] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa4d440) on tqpair=0x9c9ec0 00:24:09.716 [2024-07-15 22:21:34.974027] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:24:09.716 [2024-07-15 22:21:34.974038] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:24:09.716 [2024-07-15 22:21:34.974047] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:24:09.716 [2024-07-15 22:21:34.974057] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.716 [2024-07-15 22:21:34.974061] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9c9ec0) 00:24:09.716 [2024-07-15 22:21:34.974067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.716 [2024-07-15 22:21:34.974081] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa4d440, cid 4, qid 0 00:24:09.716 [2024-07-15 22:21:34.974275] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:09.716 [2024-07-15 22:21:34.974283] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:09.716 [2024-07-15 22:21:34.974286] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:09.716 [2024-07-15 22:21:34.974290] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9c9ec0): datao=0, datal=4096, cccid=4 00:24:09.716 [2024-07-15 22:21:34.974294] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa4d440) on tqpair(0x9c9ec0): expected_datao=0, payload_size=4096 00:24:09.716 [2024-07-15 22:21:34.974302] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.716 [2024-07-15 22:21:34.974309] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:09.716 [2024-07-15 22:21:34.974313] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:09.716 [2024-07-15 22:21:34.974461] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.716 [2024-07-15 22:21:34.974468] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.716 [2024-07-15 22:21:34.974473] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.716 [2024-07-15 22:21:34.974479] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa4d440) on tqpair=0x9c9ec0 00:24:09.716 [2024-07-15 22:21:34.974491] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:09.716 [2024-07-15 22:21:34.974500] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:09.716 [2024-07-15 22:21:34.974507] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.716 [2024-07-15 22:21:34.974511] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9c9ec0) 00:24:09.716 [2024-07-15 22:21:34.974517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.716 [2024-07-15 22:21:34.974532] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa4d440, cid 4, qid 0 00:24:09.716 [2024-07-15 22:21:34.974796] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:09.716 [2024-07-15 22:21:34.974806] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:09.716 [2024-07-15 22:21:34.974810] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:09.716 [2024-07-15 22:21:34.974813] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9c9ec0): datao=0, datal=4096, cccid=4 00:24:09.716 [2024-07-15 22:21:34.974817] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa4d440) on tqpair(0x9c9ec0): expected_datao=0, payload_size=4096 00:24:09.716 [2024-07-15 22:21:34.974822] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.716 [2024-07-15 22:21:34.974828] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:09.716 [2024-07-15 22:21:34.974832] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:09.716 [2024-07-15 22:21:34.974958] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.716 [2024-07-15 22:21:34.974964] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.716 [2024-07-15 22:21:34.974970] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.716 [2024-07-15 22:21:34.974976] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa4d440) on tqpair=0x9c9ec0 00:24:09.716 [2024-07-15 22:21:34.974982] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:09.716 [2024-07-15 22:21:34.974990] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:24:09.716 [2024-07-15 22:21:34.975001] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:24:09.716 [2024-07-15 22:21:34.975007] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:09.716 [2024-07-15 22:21:34.975013] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:09.716 [2024-07-15 22:21:34.975020] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:24:09.716 [2024-07-15 22:21:34.975027] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:24:09.716 [2024-07-15 22:21:34.975032] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:24:09.716 [2024-07-15 22:21:34.975037] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:24:09.716 [2024-07-15 22:21:34.975051] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.716 [2024-07-15 22:21:34.975055] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9c9ec0) 00:24:09.716 [2024-07-15 22:21:34.975062] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.716 [2024-07-15 22:21:34.975068] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.716 [2024-07-15 22:21:34.975072] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.716 [2024-07-15 22:21:34.975075] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9c9ec0) 00:24:09.716 [2024-07-15 22:21:34.975081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.716 [2024-07-15 22:21:34.975095] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa4d440, cid 4, qid 0 00:24:09.716 [2024-07-15 22:21:34.975101] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa4d5c0, cid 5, qid 0 00:24:09.716 [2024-07-15 22:21:34.975345] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.716 [2024-07-15 22:21:34.975353] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.716 [2024-07-15 22:21:34.975357] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.716 [2024-07-15 22:21:34.975360] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa4d440) on tqpair=0x9c9ec0 00:24:09.716 [2024-07-15 22:21:34.975367] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.716 [2024-07-15 22:21:34.975373] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.716 [2024-07-15 22:21:34.975376] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.716 [2024-07-15 22:21:34.975379] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa4d5c0) on tqpair=0x9c9ec0 00:24:09.716 [2024-07-15 22:21:34.975389] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.716 [2024-07-15 22:21:34.975396] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9c9ec0) 00:24:09.716 [2024-07-15 22:21:34.975403] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.716 [2024-07-15 22:21:34.975414] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa4d5c0, cid 5, qid 0 00:24:09.716 [2024-07-15 22:21:34.975599] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.716 [2024-07-15 22:21:34.975606] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.716 [2024-07-15 22:21:34.975610] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.716 [2024-07-15 22:21:34.975617] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa4d5c0) on tqpair=0x9c9ec0 00:24:09.716 [2024-07-15 22:21:34.975626] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.716 [2024-07-15 22:21:34.975630] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9c9ec0) 00:24:09.716 [2024-07-15 22:21:34.975636] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.716 [2024-07-15 22:21:34.975657] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa4d5c0, cid 5, qid 0 00:24:09.716 [2024-07-15 22:21:34.975907] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.716 [2024-07-15 22:21:34.975914] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:24:09.716 [2024-07-15 22:21:34.975917] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.716 [2024-07-15 22:21:34.975921] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa4d5c0) on tqpair=0x9c9ec0 00:24:09.716 [2024-07-15 22:21:34.975930] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.716 [2024-07-15 22:21:34.975933] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9c9ec0) 00:24:09.716 [2024-07-15 22:21:34.975940] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.716 [2024-07-15 22:21:34.975953] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa4d5c0, cid 5, qid 0 00:24:09.716 [2024-07-15 22:21:34.976183] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.716 [2024-07-15 22:21:34.976190] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.717 [2024-07-15 22:21:34.976195] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.717 [2024-07-15 22:21:34.976201] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa4d5c0) on tqpair=0x9c9ec0 00:24:09.717 [2024-07-15 22:21:34.976216] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.717 [2024-07-15 22:21:34.976220] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9c9ec0) 00:24:09.717 [2024-07-15 22:21:34.976227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.717 [2024-07-15 22:21:34.976234] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.717 [2024-07-15 22:21:34.976237] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9c9ec0) 00:24:09.717 [2024-07-15 22:21:34.976244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.717 [2024-07-15 22:21:34.976255] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.717 [2024-07-15 22:21:34.976260] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x9c9ec0) 00:24:09.717 [2024-07-15 22:21:34.976266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.717 [2024-07-15 22:21:34.976274] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.717 [2024-07-15 22:21:34.976277] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x9c9ec0) 00:24:09.717 [2024-07-15 22:21:34.976283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.717 [2024-07-15 22:21:34.976296] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa4d5c0, cid 5, qid 0 00:24:09.717 [2024-07-15 22:21:34.976301] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa4d440, cid 4, qid 0 00:24:09.717 [2024-07-15 22:21:34.976305] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa4d740, cid 6, qid 0 00:24:09.717 [2024-07-15 
22:21:34.976310] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa4d8c0, cid 7, qid 0 00:24:09.717 [2024-07-15 22:21:34.976607] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:09.717 [2024-07-15 22:21:34.976618] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:09.717 [2024-07-15 22:21:34.976624] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:09.717 [2024-07-15 22:21:34.976630] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9c9ec0): datao=0, datal=8192, cccid=5 00:24:09.717 [2024-07-15 22:21:34.976638] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa4d5c0) on tqpair(0x9c9ec0): expected_datao=0, payload_size=8192 00:24:09.717 [2024-07-15 22:21:34.976648] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.717 [2024-07-15 22:21:34.976750] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:09.717 [2024-07-15 22:21:34.976756] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:09.717 [2024-07-15 22:21:34.976762] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:09.717 [2024-07-15 22:21:34.976768] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:09.717 [2024-07-15 22:21:34.976771] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:09.717 [2024-07-15 22:21:34.976774] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9c9ec0): datao=0, datal=512, cccid=4 00:24:09.717 [2024-07-15 22:21:34.976779] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa4d440) on tqpair(0x9c9ec0): expected_datao=0, payload_size=512 00:24:09.717 [2024-07-15 22:21:34.976783] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.717 [2024-07-15 22:21:34.976789] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:09.717 [2024-07-15 22:21:34.976793] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:09.717 [2024-07-15 22:21:34.976800] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:09.717 [2024-07-15 22:21:34.976809] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:09.717 [2024-07-15 22:21:34.976815] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:09.717 [2024-07-15 22:21:34.976822] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9c9ec0): datao=0, datal=512, cccid=6 00:24:09.717 [2024-07-15 22:21:34.976829] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa4d740) on tqpair(0x9c9ec0): expected_datao=0, payload_size=512 00:24:09.717 [2024-07-15 22:21:34.976836] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.717 [2024-07-15 22:21:34.976847] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:09.717 [2024-07-15 22:21:34.976853] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:09.717 [2024-07-15 22:21:34.976863] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:09.717 [2024-07-15 22:21:34.976872] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:09.717 [2024-07-15 22:21:34.976876] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:09.717 [2024-07-15 22:21:34.976879] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9c9ec0): datao=0, datal=4096, cccid=7 00:24:09.717 [2024-07-15 22:21:34.976883] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa4d8c0) on tqpair(0x9c9ec0): expected_datao=0, payload_size=4096 00:24:09.717 [2024-07-15 22:21:34.976887] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.717 [2024-07-15 22:21:34.976894] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:09.717 [2024-07-15 22:21:34.976897] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:09.717 [2024-07-15 22:21:34.976904] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.717 [2024-07-15 22:21:34.976910] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.717 [2024-07-15 22:21:34.976913] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.717 [2024-07-15 22:21:34.976917] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa4d5c0) on tqpair=0x9c9ec0 00:24:09.717 [2024-07-15 22:21:34.976930] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.717 [2024-07-15 22:21:34.976936] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.717 [2024-07-15 22:21:34.976939] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.717 [2024-07-15 22:21:34.976943] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa4d440) on tqpair=0x9c9ec0 00:24:09.717 [2024-07-15 22:21:34.976953] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.717 [2024-07-15 22:21:34.976959] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.717 [2024-07-15 22:21:34.976962] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.717 [2024-07-15 22:21:34.976968] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa4d740) on tqpair=0x9c9ec0 00:24:09.717 [2024-07-15 22:21:34.976975] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.717 [2024-07-15 22:21:34.976981] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.717 [2024-07-15 22:21:34.976984] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.717 [2024-07-15 22:21:34.976988] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa4d8c0) on tqpair=0x9c9ec0 00:24:09.717 ===================================================== 00:24:09.717 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:09.717 ===================================================== 00:24:09.717 Controller Capabilities/Features 00:24:09.717 ================================ 00:24:09.717 Vendor ID: 8086 00:24:09.717 Subsystem Vendor ID: 8086 00:24:09.717 Serial Number: SPDK00000000000001 00:24:09.717 Model Number: SPDK bdev Controller 00:24:09.717 Firmware Version: 24.09 00:24:09.717 Recommended Arb Burst: 6 00:24:09.717 IEEE OUI Identifier: e4 d2 5c 00:24:09.717 Multi-path I/O 00:24:09.717 May have multiple subsystem ports: Yes 00:24:09.717 May have multiple controllers: Yes 00:24:09.717 Associated with SR-IOV VF: No 00:24:09.717 Max Data Transfer Size: 131072 00:24:09.717 Max Number of Namespaces: 32 00:24:09.717 Max Number of I/O Queues: 127 00:24:09.717 NVMe Specification Version (VS): 1.3 00:24:09.717 NVMe Specification Version (Identify): 1.3 00:24:09.717 Maximum Queue Entries: 128 00:24:09.717 Contiguous Queues Required: Yes 00:24:09.717 Arbitration Mechanisms Supported 00:24:09.717 Weighted Round Robin: Not Supported 00:24:09.717 Vendor Specific: Not Supported 00:24:09.717 Reset Timeout: 15000 ms 00:24:09.717 
Doorbell Stride: 4 bytes 00:24:09.717 NVM Subsystem Reset: Not Supported 00:24:09.717 Command Sets Supported 00:24:09.717 NVM Command Set: Supported 00:24:09.717 Boot Partition: Not Supported 00:24:09.717 Memory Page Size Minimum: 4096 bytes 00:24:09.717 Memory Page Size Maximum: 4096 bytes 00:24:09.717 Persistent Memory Region: Not Supported 00:24:09.717 Optional Asynchronous Events Supported 00:24:09.717 Namespace Attribute Notices: Supported 00:24:09.717 Firmware Activation Notices: Not Supported 00:24:09.717 ANA Change Notices: Not Supported 00:24:09.717 PLE Aggregate Log Change Notices: Not Supported 00:24:09.717 LBA Status Info Alert Notices: Not Supported 00:24:09.717 EGE Aggregate Log Change Notices: Not Supported 00:24:09.717 Normal NVM Subsystem Shutdown event: Not Supported 00:24:09.717 Zone Descriptor Change Notices: Not Supported 00:24:09.717 Discovery Log Change Notices: Not Supported 00:24:09.717 Controller Attributes 00:24:09.717 128-bit Host Identifier: Supported 00:24:09.717 Non-Operational Permissive Mode: Not Supported 00:24:09.717 NVM Sets: Not Supported 00:24:09.717 Read Recovery Levels: Not Supported 00:24:09.717 Endurance Groups: Not Supported 00:24:09.717 Predictable Latency Mode: Not Supported 00:24:09.717 Traffic Based Keep ALive: Not Supported 00:24:09.717 Namespace Granularity: Not Supported 00:24:09.718 SQ Associations: Not Supported 00:24:09.718 UUID List: Not Supported 00:24:09.718 Multi-Domain Subsystem: Not Supported 00:24:09.718 Fixed Capacity Management: Not Supported 00:24:09.718 Variable Capacity Management: Not Supported 00:24:09.718 Delete Endurance Group: Not Supported 00:24:09.718 Delete NVM Set: Not Supported 00:24:09.718 Extended LBA Formats Supported: Not Supported 00:24:09.718 Flexible Data Placement Supported: Not Supported 00:24:09.718 00:24:09.718 Controller Memory Buffer Support 00:24:09.718 ================================ 00:24:09.718 Supported: No 00:24:09.718 00:24:09.718 Persistent Memory Region Support 00:24:09.718 ================================ 00:24:09.718 Supported: No 00:24:09.718 00:24:09.718 Admin Command Set Attributes 00:24:09.718 ============================ 00:24:09.718 Security Send/Receive: Not Supported 00:24:09.718 Format NVM: Not Supported 00:24:09.718 Firmware Activate/Download: Not Supported 00:24:09.718 Namespace Management: Not Supported 00:24:09.718 Device Self-Test: Not Supported 00:24:09.718 Directives: Not Supported 00:24:09.718 NVMe-MI: Not Supported 00:24:09.718 Virtualization Management: Not Supported 00:24:09.718 Doorbell Buffer Config: Not Supported 00:24:09.718 Get LBA Status Capability: Not Supported 00:24:09.718 Command & Feature Lockdown Capability: Not Supported 00:24:09.718 Abort Command Limit: 4 00:24:09.718 Async Event Request Limit: 4 00:24:09.718 Number of Firmware Slots: N/A 00:24:09.718 Firmware Slot 1 Read-Only: N/A 00:24:09.718 Firmware Activation Without Reset: N/A 00:24:09.718 Multiple Update Detection Support: N/A 00:24:09.718 Firmware Update Granularity: No Information Provided 00:24:09.718 Per-Namespace SMART Log: No 00:24:09.718 Asymmetric Namespace Access Log Page: Not Supported 00:24:09.718 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:09.718 Command Effects Log Page: Supported 00:24:09.718 Get Log Page Extended Data: Supported 00:24:09.718 Telemetry Log Pages: Not Supported 00:24:09.718 Persistent Event Log Pages: Not Supported 00:24:09.718 Supported Log Pages Log Page: May Support 00:24:09.718 Commands Supported & Effects Log Page: Not Supported 00:24:09.718 Feature Identifiers & 
Effects Log Page:May Support 00:24:09.718 NVMe-MI Commands & Effects Log Page: May Support 00:24:09.718 Data Area 4 for Telemetry Log: Not Supported 00:24:09.718 Error Log Page Entries Supported: 128 00:24:09.718 Keep Alive: Supported 00:24:09.718 Keep Alive Granularity: 10000 ms 00:24:09.718 00:24:09.718 NVM Command Set Attributes 00:24:09.718 ========================== 00:24:09.718 Submission Queue Entry Size 00:24:09.718 Max: 64 00:24:09.718 Min: 64 00:24:09.718 Completion Queue Entry Size 00:24:09.718 Max: 16 00:24:09.718 Min: 16 00:24:09.718 Number of Namespaces: 32 00:24:09.718 Compare Command: Supported 00:24:09.718 Write Uncorrectable Command: Not Supported 00:24:09.718 Dataset Management Command: Supported 00:24:09.718 Write Zeroes Command: Supported 00:24:09.718 Set Features Save Field: Not Supported 00:24:09.718 Reservations: Supported 00:24:09.718 Timestamp: Not Supported 00:24:09.718 Copy: Supported 00:24:09.718 Volatile Write Cache: Present 00:24:09.718 Atomic Write Unit (Normal): 1 00:24:09.718 Atomic Write Unit (PFail): 1 00:24:09.718 Atomic Compare & Write Unit: 1 00:24:09.718 Fused Compare & Write: Supported 00:24:09.718 Scatter-Gather List 00:24:09.718 SGL Command Set: Supported 00:24:09.718 SGL Keyed: Supported 00:24:09.718 SGL Bit Bucket Descriptor: Not Supported 00:24:09.718 SGL Metadata Pointer: Not Supported 00:24:09.718 Oversized SGL: Not Supported 00:24:09.718 SGL Metadata Address: Not Supported 00:24:09.718 SGL Offset: Supported 00:24:09.718 Transport SGL Data Block: Not Supported 00:24:09.718 Replay Protected Memory Block: Not Supported 00:24:09.718 00:24:09.718 Firmware Slot Information 00:24:09.718 ========================= 00:24:09.718 Active slot: 1 00:24:09.718 Slot 1 Firmware Revision: 24.09 00:24:09.718 00:24:09.718 00:24:09.718 Commands Supported and Effects 00:24:09.718 ============================== 00:24:09.718 Admin Commands 00:24:09.718 -------------- 00:24:09.718 Get Log Page (02h): Supported 00:24:09.718 Identify (06h): Supported 00:24:09.718 Abort (08h): Supported 00:24:09.718 Set Features (09h): Supported 00:24:09.718 Get Features (0Ah): Supported 00:24:09.718 Asynchronous Event Request (0Ch): Supported 00:24:09.718 Keep Alive (18h): Supported 00:24:09.718 I/O Commands 00:24:09.718 ------------ 00:24:09.718 Flush (00h): Supported LBA-Change 00:24:09.718 Write (01h): Supported LBA-Change 00:24:09.718 Read (02h): Supported 00:24:09.718 Compare (05h): Supported 00:24:09.718 Write Zeroes (08h): Supported LBA-Change 00:24:09.718 Dataset Management (09h): Supported LBA-Change 00:24:09.718 Copy (19h): Supported LBA-Change 00:24:09.718 00:24:09.718 Error Log 00:24:09.718 ========= 00:24:09.718 00:24:09.718 Arbitration 00:24:09.718 =========== 00:24:09.718 Arbitration Burst: 1 00:24:09.718 00:24:09.718 Power Management 00:24:09.718 ================ 00:24:09.718 Number of Power States: 1 00:24:09.718 Current Power State: Power State #0 00:24:09.718 Power State #0: 00:24:09.718 Max Power: 0.00 W 00:24:09.718 Non-Operational State: Operational 00:24:09.718 Entry Latency: Not Reported 00:24:09.718 Exit Latency: Not Reported 00:24:09.718 Relative Read Throughput: 0 00:24:09.718 Relative Read Latency: 0 00:24:09.718 Relative Write Throughput: 0 00:24:09.718 Relative Write Latency: 0 00:24:09.718 Idle Power: Not Reported 00:24:09.718 Active Power: Not Reported 00:24:09.718 Non-Operational Permissive Mode: Not Supported 00:24:09.718 00:24:09.718 Health Information 00:24:09.718 ================== 00:24:09.718 Critical Warnings: 00:24:09.718 Available Spare Space: 
OK 00:24:09.718 Temperature: OK 00:24:09.718 Device Reliability: OK 00:24:09.718 Read Only: No 00:24:09.718 Volatile Memory Backup: OK 00:24:09.718 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:09.718 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:09.718 Available Spare: 0% 00:24:09.718 Available Spare Threshold: 0% 00:24:09.718 Life Percentage Used:[2024-07-15 22:21:34.977090] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.718 [2024-07-15 22:21:34.977095] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x9c9ec0) 00:24:09.718 [2024-07-15 22:21:34.977102] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.718 [2024-07-15 22:21:34.977116] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa4d8c0, cid 7, qid 0 00:24:09.718 [2024-07-15 22:21:34.981131] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.718 [2024-07-15 22:21:34.981141] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.718 [2024-07-15 22:21:34.981144] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.718 [2024-07-15 22:21:34.981148] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa4d8c0) on tqpair=0x9c9ec0 00:24:09.718 [2024-07-15 22:21:34.981185] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:24:09.718 [2024-07-15 22:21:34.981195] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa4ce40) on tqpair=0x9c9ec0 00:24:09.718 [2024-07-15 22:21:34.981201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.718 [2024-07-15 22:21:34.981206] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa4cfc0) on tqpair=0x9c9ec0 00:24:09.718 [2024-07-15 22:21:34.981214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.718 [2024-07-15 22:21:34.981219] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa4d140) on tqpair=0x9c9ec0 00:24:09.718 [2024-07-15 22:21:34.981224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.718 [2024-07-15 22:21:34.981229] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa4d2c0) on tqpair=0x9c9ec0 00:24:09.718 [2024-07-15 22:21:34.981233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.718 [2024-07-15 22:21:34.981242] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.718 [2024-07-15 22:21:34.981245] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.718 [2024-07-15 22:21:34.981249] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9c9ec0) 00:24:09.718 [2024-07-15 22:21:34.981256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.718 [2024-07-15 22:21:34.981270] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa4d2c0, cid 3, qid 0 00:24:09.718 [2024-07-15 22:21:34.981485] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.718 [2024-07-15 22:21:34.981492] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.718 [2024-07-15 22:21:34.981499] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.718 [2024-07-15 22:21:34.981503] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa4d2c0) on tqpair=0x9c9ec0 00:24:09.718 [2024-07-15 22:21:34.981509] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.718 [2024-07-15 22:21:34.981513] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.718 [2024-07-15 22:21:34.981517] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9c9ec0) 00:24:09.718 [2024-07-15 22:21:34.981523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.718 [2024-07-15 22:21:34.981539] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa4d2c0, cid 3, qid 0 00:24:09.718 [2024-07-15 22:21:34.981720] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.718 [2024-07-15 22:21:34.981727] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.718 [2024-07-15 22:21:34.981731] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.718 [2024-07-15 22:21:34.981738] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa4d2c0) on tqpair=0x9c9ec0 00:24:09.719 [2024-07-15 22:21:34.981742] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:24:09.719 [2024-07-15 22:21:34.981746] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:24:09.719 [2024-07-15 22:21:34.981756] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.719 [2024-07-15 22:21:34.981759] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.719 [2024-07-15 22:21:34.981763] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9c9ec0) 00:24:09.719 [2024-07-15 22:21:34.981769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.719 [2024-07-15 22:21:34.981780] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa4d2c0, cid 3, qid 0 00:24:09.719 [2024-07-15 22:21:34.982004] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.719 [2024-07-15 22:21:34.982010] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.719 [2024-07-15 22:21:34.982014] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.719 [2024-07-15 22:21:34.982021] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa4d2c0) on tqpair=0x9c9ec0 00:24:09.719 [2024-07-15 22:21:34.982031] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.719 [2024-07-15 22:21:34.982035] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.719 [2024-07-15 22:21:34.982039] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9c9ec0) 00:24:09.719 [2024-07-15 22:21:34.982045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.719 [2024-07-15 22:21:34.982055] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa4d2c0, cid 3, qid 0 00:24:09.719 [2024-07-15 22:21:34.982271] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.719 [2024-07-15 22:21:34.982279] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.719 [2024-07-15 22:21:34.982285] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.719 [2024-07-15 22:21:34.982290] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa4d2c0) on tqpair=0x9c9ec0 00:24:09.719 [2024-07-15 22:21:34.982300] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.719 [2024-07-15 22:21:34.982304] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.719 [2024-07-15 22:21:34.982307] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9c9ec0) 00:24:09.719 [2024-07-15 22:21:34.982314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.719 [2024-07-15 22:21:34.982324] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa4d2c0, cid 3, qid 0 00:24:09.719 [2024-07-15 22:21:34.982523] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.719 [2024-07-15 22:21:34.982530] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.719 [2024-07-15 22:21:34.982534] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.719 [2024-07-15 22:21:34.982541] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa4d2c0) on tqpair=0x9c9ec0 00:24:09.719 [2024-07-15 22:21:34.982551] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.719 [2024-07-15 22:21:34.982554] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.719 [2024-07-15 22:21:34.982558] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9c9ec0) 00:24:09.719 [2024-07-15 22:21:34.982567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.719 [2024-07-15 22:21:34.982577] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa4d2c0, cid 3, qid 0 00:24:09.719 [2024-07-15 22:21:34.982799] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.719 [2024-07-15 22:21:34.982805] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.719 [2024-07-15 22:21:34.982809] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.719 [2024-07-15 22:21:34.982816] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa4d2c0) on tqpair=0x9c9ec0 00:24:09.719 [2024-07-15 22:21:34.982826] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.719 [2024-07-15 22:21:34.982829] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.719 [2024-07-15 22:21:34.982833] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9c9ec0) 00:24:09.719 [2024-07-15 22:21:34.982839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.719 [2024-07-15 22:21:34.982849] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa4d2c0, cid 3, qid 0 00:24:09.719 [2024-07-15 22:21:34.983074] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.719 [2024-07-15 22:21:34.983080] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.719 [2024-07-15 22:21:34.983084] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.719 [2024-07-15 22:21:34.983091] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa4d2c0) on tqpair=0x9c9ec0 00:24:09.719 [2024-07-15 22:21:34.983101] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.719 [2024-07-15 22:21:34.983104] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.719 [2024-07-15 22:21:34.983108] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9c9ec0) 00:24:09.719 [2024-07-15 22:21:34.983114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.719 [2024-07-15 22:21:34.983129] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa4d2c0, cid 3, qid 0 00:24:09.719 [2024-07-15 22:21:34.983356] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.719 [2024-07-15 22:21:34.983362] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.719 [2024-07-15 22:21:34.983367] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.719 [2024-07-15 22:21:34.983373] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa4d2c0) on tqpair=0x9c9ec0 00:24:09.719 [2024-07-15 22:21:34.983383] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.719 [2024-07-15 22:21:34.983387] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.719 [2024-07-15 22:21:34.983390] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9c9ec0) 00:24:09.719 [2024-07-15 22:21:34.983397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.719 [2024-07-15 22:21:34.983407] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa4d2c0, cid 3, qid 0 00:24:09.719 [2024-07-15 22:21:34.983631] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.719 [2024-07-15 22:21:34.983637] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.719 [2024-07-15 22:21:34.983641] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.719 [2024-07-15 22:21:34.983646] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa4d2c0) on tqpair=0x9c9ec0 00:24:09.719 [2024-07-15 22:21:34.983658] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.719 [2024-07-15 22:21:34.983662] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.719 [2024-07-15 22:21:34.983666] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9c9ec0) 00:24:09.719 [2024-07-15 22:21:34.983672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.719 [2024-07-15 22:21:34.983684] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa4d2c0, cid 3, qid 0 00:24:09.719 [2024-07-15 22:21:34.983908] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.719 [2024-07-15 22:21:34.983914] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.719 [2024-07-15 22:21:34.983918] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.719 [2024-07-15 22:21:34.983925] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa4d2c0) on tqpair=0x9c9ec0 00:24:09.719 
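From the "Prepare to destruct SSD" entry onward the trace is the teardown path: outstanding requests on every queue pair are completed as ABORTED - SQ DELETION, the shutdown notification is written (RTD3E = 0, shutdown timeout 10000 ms), and CSTS is polled via repeated Property Get capsules until shutdown completes. In application code this whole sequence is one detach call; a minimal sketch, assuming the same `ctrlr` handle as above:

```c
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

/*
 * Tear down the controller. spdk_nvme_detach() performs the shutdown
 * notification and CSTS polling reflected in the trace, then frees the
 * controller handle. Sketch only, not part of the test run.
 */
static void
shutdown_ctrlr(struct spdk_nvme_ctrlr *ctrlr)
{
	if (spdk_nvme_detach(ctrlr) != 0) {
		fprintf(stderr, "detach failed\n");
	}
	spdk_env_fini();
}
```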
[2024-07-15 22:21:34.983934] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.719 [2024-07-15 22:21:34.983938] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.719 [2024-07-15 22:21:34.983942] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9c9ec0) 00:24:09.719 [2024-07-15 22:21:34.983948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.719 [2024-07-15 22:21:34.983958] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa4d2c0, cid 3, qid 0 00:24:09.719 [2024-07-15 22:21:34.984188] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.719 [2024-07-15 22:21:34.984195] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.719 [2024-07-15 22:21:34.984200] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.719 [2024-07-15 22:21:34.984206] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa4d2c0) on tqpair=0x9c9ec0 00:24:09.719 [2024-07-15 22:21:34.984215] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.719 [2024-07-15 22:21:34.984219] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.719 [2024-07-15 22:21:34.984223] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9c9ec0) 00:24:09.719 [2024-07-15 22:21:34.984229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.719 [2024-07-15 22:21:34.984239] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa4d2c0, cid 3, qid 0 00:24:09.719 [2024-07-15 22:21:34.984415] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.719 [2024-07-15 22:21:34.984421] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.719 [2024-07-15 22:21:34.984425] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.719 [2024-07-15 22:21:34.984432] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa4d2c0) on tqpair=0x9c9ec0 00:24:09.719 [2024-07-15 22:21:34.984442] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.719 [2024-07-15 22:21:34.984446] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.719 [2024-07-15 22:21:34.984449] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9c9ec0) 00:24:09.719 [2024-07-15 22:21:34.984456] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.719 [2024-07-15 22:21:34.984465] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa4d2c0, cid 3, qid 0 00:24:09.719 [2024-07-15 22:21:34.984681] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.719 [2024-07-15 22:21:34.984687] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.719 [2024-07-15 22:21:34.984691] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.719 [2024-07-15 22:21:34.984698] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa4d2c0) on tqpair=0x9c9ec0 00:24:09.719 [2024-07-15 22:21:34.984708] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.719 [2024-07-15 22:21:34.984712] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.719 [2024-07-15 
22:21:34.984715] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9c9ec0) 00:24:09.719 [2024-07-15 22:21:34.984722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.719 [2024-07-15 22:21:34.984731] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa4d2c0, cid 3, qid 0 00:24:09.719 [2024-07-15 22:21:34.984955] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.719 [2024-07-15 22:21:34.984962] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.719 [2024-07-15 22:21:34.984966] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.719 [2024-07-15 22:21:34.984973] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa4d2c0) on tqpair=0x9c9ec0 00:24:09.719 [2024-07-15 22:21:34.984983] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.719 [2024-07-15 22:21:34.984986] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.719 [2024-07-15 22:21:34.984990] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9c9ec0) 00:24:09.719 [2024-07-15 22:21:34.984996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.720 [2024-07-15 22:21:34.985006] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa4d2c0, cid 3, qid 0 00:24:09.720 [2024-07-15 22:21:34.989133] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.720 [2024-07-15 22:21:34.989143] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.720 [2024-07-15 22:21:34.989147] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.720 [2024-07-15 22:21:34.989151] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa4d2c0) on tqpair=0x9c9ec0 00:24:09.720 [2024-07-15 22:21:34.989159] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:24:09.720 0% 00:24:09.720 Data Units Read: 0 00:24:09.720 Data Units Written: 0 00:24:09.720 Host Read Commands: 0 00:24:09.720 Host Write Commands: 0 00:24:09.720 Controller Busy Time: 0 minutes 00:24:09.720 Power Cycles: 0 00:24:09.720 Power On Hours: 0 hours 00:24:09.720 Unsafe Shutdowns: 0 00:24:09.720 Unrecoverable Media Errors: 0 00:24:09.720 Lifetime Error Log Entries: 0 00:24:09.720 Warning Temperature Time: 0 minutes 00:24:09.720 Critical Temperature Time: 0 minutes 00:24:09.720 00:24:09.720 Number of Queues 00:24:09.720 ================ 00:24:09.720 Number of I/O Submission Queues: 127 00:24:09.720 Number of I/O Completion Queues: 127 00:24:09.720 00:24:09.720 Active Namespaces 00:24:09.720 ================= 00:24:09.720 Namespace ID:1 00:24:09.720 Error Recovery Timeout: Unlimited 00:24:09.720 Command Set Identifier: NVM (00h) 00:24:09.720 Deallocate: Supported 00:24:09.720 Deallocated/Unwritten Error: Not Supported 00:24:09.720 Deallocated Read Value: Unknown 00:24:09.720 Deallocate in Write Zeroes: Not Supported 00:24:09.720 Deallocated Guard Field: 0xFFFF 00:24:09.720 Flush: Supported 00:24:09.720 Reservation: Supported 00:24:09.720 Namespace Sharing Capabilities: Multiple Controllers 00:24:09.720 Size (in LBAs): 131072 (0GiB) 00:24:09.720 Capacity (in LBAs): 131072 (0GiB) 00:24:09.720 Utilization (in LBAs): 131072 (0GiB) 00:24:09.720 NGUID: 
ABCDEF0123456789ABCDEF0123456789 00:24:09.720 EUI64: ABCDEF0123456789 00:24:09.720 UUID: 5e96435b-4026-4d97-a4dd-5892204428c8 00:24:09.720 Thin Provisioning: Not Supported 00:24:09.720 Per-NS Atomic Units: Yes 00:24:09.720 Atomic Boundary Size (Normal): 0 00:24:09.720 Atomic Boundary Size (PFail): 0 00:24:09.720 Atomic Boundary Offset: 0 00:24:09.720 Maximum Single Source Range Length: 65535 00:24:09.720 Maximum Copy Length: 65535 00:24:09.720 Maximum Source Range Count: 1 00:24:09.720 NGUID/EUI64 Never Reused: No 00:24:09.720 Namespace Write Protected: No 00:24:09.720 Number of LBA Formats: 1 00:24:09.720 Current LBA Format: LBA Format #00 00:24:09.720 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:09.720 00:24:09.720 22:21:34 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:09.720 22:21:35 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:09.720 22:21:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.720 22:21:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:09.720 22:21:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.720 22:21:35 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:09.720 22:21:35 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:09.720 22:21:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:09.720 22:21:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:24:09.720 22:21:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:09.720 22:21:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:24:09.720 22:21:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:09.720 22:21:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:09.980 rmmod nvme_tcp 00:24:09.980 rmmod nvme_fabrics 00:24:09.980 rmmod nvme_keyring 00:24:09.980 22:21:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:09.980 22:21:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:24:09.980 22:21:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:24:09.980 22:21:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2869216 ']' 00:24:09.980 22:21:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2869216 00:24:09.980 22:21:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 2869216 ']' 00:24:09.980 22:21:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 2869216 00:24:09.980 22:21:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:24:09.980 22:21:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:09.980 22:21:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2869216 00:24:09.980 22:21:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:09.980 22:21:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:09.980 22:21:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2869216' 00:24:09.980 killing process with pid 2869216 00:24:09.980 22:21:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 2869216 00:24:09.980 22:21:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 2869216 00:24:09.980 
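The identify dump above is followed by the fixture's teardown: the test subsystem is deleted over RPC, the host-side NVMe/TCP kernel modules are removed, and the target process is killed and reaped. A minimal standalone sketch of that same pattern, assuming the shell is at the SPDK repo root and holds the target PID in $nvmfpid the way the harness holds 2869216 above, would be:

  # sketch of the teardown pattern shown above (not the harness code itself)
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem
  modprobe -v -r nvme-tcp          # removes nvme_tcp / nvme_fabrics / nvme_keyring, as logged above
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"   # killprocess: stop the nvmf_tgt reactor and reap it (wait assumes it is a child of this shell)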
22:21:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:09.980 22:21:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:09.980 22:21:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:09.980 22:21:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:09.980 22:21:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:09.980 22:21:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:09.980 22:21:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:09.980 22:21:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:12.523 22:21:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:12.523 00:24:12.523 real 0m11.026s 00:24:12.523 user 0m7.947s 00:24:12.523 sys 0m5.698s 00:24:12.523 22:21:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:12.523 22:21:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:12.523 ************************************ 00:24:12.523 END TEST nvmf_identify 00:24:12.523 ************************************ 00:24:12.523 22:21:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:12.523 22:21:37 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:12.523 22:21:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:12.523 22:21:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:12.523 22:21:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:12.523 ************************************ 00:24:12.523 START TEST nvmf_perf 00:24:12.523 ************************************ 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:12.523 * Looking for test storage... 
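nvmf_identify is done and the harness immediately dispatches the next stage through run_test, handing perf.sh the same --transport=tcp argument. To replay just this stage outside Jenkins, a rough equivalent (assuming the same workspace checkout and root privileges for hugepage and interface setup) would be:

  # hypothetical standalone invocation mirroring the run_test call above
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sudo ./test/nvmf/host/perf.sh --transport=tcp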
00:24:12.523 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.523 22:21:37 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:12.523 22:21:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:24:19.163 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:19.163 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:19.163 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:19.163 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:19.163 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:19.163 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:19.163 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:19.163 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:24:19.163 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:19.163 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:24:19.163 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:24:19.163 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:24:19.163 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:24:19.163 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:24:19.163 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:19.163 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:19.163 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:19.163 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:19.163 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:19.163 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:19.163 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:19.163 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:19.163 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:19.163 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:19.163 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:19.163 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:19.163 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:19.163 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:19.163 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:19.163 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:19.163 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:19.163 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:19.163 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:19.163 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:19.163 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:19.163 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:19.164 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:19.164 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:19.164 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:19.164 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:19.425 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:19.425 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:19.425 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:19.425 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:19.425 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:19.425 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:19.425 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:19.425 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:19.425 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.506 ms 00:24:19.425 00:24:19.425 --- 10.0.0.2 ping statistics --- 00:24:19.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.425 rtt min/avg/max/mdev = 0.506/0.506/0.506/0.000 ms 00:24:19.425 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:19.425 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:19.425 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.348 ms 00:24:19.425 00:24:19.425 --- 10.0.0.1 ping statistics --- 00:24:19.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.425 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:24:19.425 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:19.425 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:24:19.425 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:19.425 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:19.425 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:19.425 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:19.425 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:19.425 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:19.425 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:19.425 22:21:44 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:19.425 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:19.425 22:21:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:19.425 22:21:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:19.425 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2873571 00:24:19.425 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2873571 00:24:19.425 22:21:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:19.425 22:21:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 2873571 ']' 00:24:19.425 22:21:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:19.425 22:21:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:19.425 22:21:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:19.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:19.425 22:21:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:19.425 22:21:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:19.683 [2024-07-15 22:21:44.764514] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:24:19.683 [2024-07-15 22:21:44.764589] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:19.684 EAL: No free 2048 kB hugepages reported on node 1 00:24:19.684 [2024-07-15 22:21:44.836104] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:19.684 [2024-07-15 22:21:44.913655] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:19.684 [2024-07-15 22:21:44.913693] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:19.684 [2024-07-15 22:21:44.913701] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:19.684 [2024-07-15 22:21:44.913709] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:19.684 [2024-07-15 22:21:44.913714] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:19.684 [2024-07-15 22:21:44.913860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:19.684 [2024-07-15 22:21:44.913978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:19.684 [2024-07-15 22:21:44.914181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:19.684 [2024-07-15 22:21:44.914190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:20.249 22:21:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:20.249 22:21:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:24:20.249 22:21:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:20.249 22:21:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:20.249 22:21:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:20.506 22:21:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:20.506 22:21:45 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:20.506 22:21:45 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:20.763 22:21:46 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:20.763 22:21:46 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:21.021 22:21:46 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:24:21.021 22:21:46 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:21.279 22:21:46 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:21.279 22:21:46 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:24:21.279 22:21:46 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:21.279 22:21:46 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:21.279 22:21:46 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:21.279 [2024-07-15 22:21:46.561284] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:21.279 22:21:46 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:21.537 22:21:46 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:21.537 22:21:46 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:21.794 22:21:46 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:21.794 22:21:46 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:21.794 22:21:47 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:22.052 [2024-07-15 22:21:47.227680] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:22.052 22:21:47 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:22.310 22:21:47 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:24:22.310 22:21:47 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:22.310 22:21:47 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:22.310 22:21:47 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:23.682 Initializing NVMe Controllers 00:24:23.682 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:24:23.682 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:24:23.682 Initialization complete. Launching workers. 00:24:23.682 ======================================================== 00:24:23.682 Latency(us) 00:24:23.682 Device Information : IOPS MiB/s Average min max 00:24:23.682 PCIE (0000:65:00.0) NSID 1 from core 0: 79755.47 311.54 400.68 13.47 4844.07 00:24:23.682 ======================================================== 00:24:23.682 Total : 79755.47 311.54 400.68 13.47 4844.07 00:24:23.682 00:24:23.682 22:21:48 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:23.682 EAL: No free 2048 kB hugepages reported on node 1 00:24:25.056 Initializing NVMe Controllers 00:24:25.056 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:25.056 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:25.056 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:25.056 Initialization complete. Launching workers. 
00:24:25.056 ======================================================== 00:24:25.056 Latency(us) 00:24:25.056 Device Information : IOPS MiB/s Average min max 00:24:25.056 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 97.00 0.38 10696.27 488.72 46466.84 00:24:25.056 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 70.00 0.27 14821.66 7953.76 47889.72 00:24:25.056 ======================================================== 00:24:25.056 Total : 167.00 0.65 12425.48 488.72 47889.72 00:24:25.056 00:24:25.057 22:21:49 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:25.057 EAL: No free 2048 kB hugepages reported on node 1 00:24:26.431 Initializing NVMe Controllers 00:24:26.431 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:26.431 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:26.431 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:26.431 Initialization complete. Launching workers. 00:24:26.431 ======================================================== 00:24:26.431 Latency(us) 00:24:26.431 Device Information : IOPS MiB/s Average min max 00:24:26.431 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9296.98 36.32 3450.46 578.38 7908.31 00:24:26.431 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3914.99 15.29 8210.67 5733.39 16311.62 00:24:26.431 ======================================================== 00:24:26.431 Total : 13211.97 51.61 4861.01 578.38 16311.62 00:24:26.431 00:24:26.431 22:21:51 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:26.431 22:21:51 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:26.431 22:21:51 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:26.431 EAL: No free 2048 kB hugepages reported on node 1 00:24:28.962 Initializing NVMe Controllers 00:24:28.962 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:28.962 Controller IO queue size 128, less than required. 00:24:28.962 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:28.963 Controller IO queue size 128, less than required. 00:24:28.963 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:28.963 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:28.963 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:28.963 Initialization complete. Launching workers. 
00:24:28.963 ======================================================== 00:24:28.963 Latency(us) 00:24:28.963 Device Information : IOPS MiB/s Average min max 00:24:28.963 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 970.99 242.75 135520.76 79119.62 199350.01 00:24:28.963 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 617.99 154.50 218642.74 82244.66 342024.18 00:24:28.963 ======================================================== 00:24:28.963 Total : 1588.98 397.25 167848.88 79119.62 342024.18 00:24:28.963 00:24:28.963 22:21:53 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:28.963 EAL: No free 2048 kB hugepages reported on node 1 00:24:28.963 No valid NVMe controllers or AIO or URING devices found 00:24:28.963 Initializing NVMe Controllers 00:24:28.963 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:28.963 Controller IO queue size 128, less than required. 00:24:28.963 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:28.963 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:28.963 Controller IO queue size 128, less than required. 00:24:28.963 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:28.963 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:24:28.963 WARNING: Some requested NVMe devices were skipped 00:24:28.963 22:21:54 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:28.963 EAL: No free 2048 kB hugepages reported on node 1 00:24:31.641 Initializing NVMe Controllers 00:24:31.641 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:31.641 Controller IO queue size 128, less than required. 00:24:31.641 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:31.641 Controller IO queue size 128, less than required. 00:24:31.641 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:31.641 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:31.641 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:31.641 Initialization complete. Launching workers. 
00:24:31.641 00:24:31.641 ==================== 00:24:31.641 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:31.641 TCP transport: 00:24:31.641 polls: 44156 00:24:31.641 idle_polls: 16262 00:24:31.641 sock_completions: 27894 00:24:31.641 nvme_completions: 3739 00:24:31.641 submitted_requests: 5638 00:24:31.641 queued_requests: 1 00:24:31.641 00:24:31.641 ==================== 00:24:31.641 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:31.641 TCP transport: 00:24:31.641 polls: 43725 00:24:31.641 idle_polls: 12400 00:24:31.641 sock_completions: 31325 00:24:31.641 nvme_completions: 4125 00:24:31.641 submitted_requests: 6128 00:24:31.641 queued_requests: 1 00:24:31.641 ======================================================== 00:24:31.641 Latency(us) 00:24:31.641 Device Information : IOPS MiB/s Average min max 00:24:31.641 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 933.37 233.34 141998.48 66466.12 231172.62 00:24:31.641 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1029.76 257.44 128224.18 61376.22 179785.74 00:24:31.641 ======================================================== 00:24:31.641 Total : 1963.13 490.78 134773.19 61376.22 231172.62 00:24:31.641 00:24:31.641 22:21:56 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:31.641 22:21:56 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:31.641 22:21:56 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:31.641 22:21:56 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:31.641 22:21:56 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:31.641 22:21:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:31.641 22:21:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:24:31.641 22:21:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:31.641 22:21:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:24:31.641 22:21:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:31.641 22:21:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:31.641 rmmod nvme_tcp 00:24:31.641 rmmod nvme_fabrics 00:24:31.641 rmmod nvme_keyring 00:24:31.641 22:21:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:31.641 22:21:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:24:31.641 22:21:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:24:31.641 22:21:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2873571 ']' 00:24:31.641 22:21:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2873571 00:24:31.641 22:21:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 2873571 ']' 00:24:31.641 22:21:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 2873571 00:24:31.641 22:21:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:24:31.641 22:21:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:31.641 22:21:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2873571 00:24:31.641 22:21:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:31.641 22:21:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:31.641 22:21:56 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2873571' 00:24:31.641 killing process with pid 2873571 00:24:31.641 22:21:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 2873571 00:24:31.641 22:21:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 2873571 00:24:33.539 22:21:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:33.539 22:21:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:33.539 22:21:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:33.539 22:21:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:33.539 22:21:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:33.539 22:21:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:33.539 22:21:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:33.539 22:21:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.082 22:22:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:36.082 00:24:36.082 real 0m23.486s 00:24:36.082 user 0m57.579s 00:24:36.082 sys 0m7.470s 00:24:36.082 22:22:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:36.082 22:22:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:36.082 ************************************ 00:24:36.082 END TEST nvmf_perf 00:24:36.082 ************************************ 00:24:36.082 22:22:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:36.082 22:22:00 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:36.082 22:22:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:36.082 22:22:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:36.082 22:22:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:36.082 ************************************ 00:24:36.082 START TEST nvmf_fio_host 00:24:36.082 ************************************ 00:24:36.082 22:22:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:36.082 * Looking for test storage... 
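Because the last perf pass ran with --transport-stat, the poll counters printed above (polls, idle_polls, sock_completions, nvme_completions) can be turned into a rough efficiency figure. A small sketch using the NSID 1 numbers from this run, copied in by hand, might be:

  # completions per non-idle poll for the NSID 1 queue pair (figures taken from the log above)
  polls=44156; idle_polls=16262; nvme_completions=3739
  awk -v c="$nvme_completions" -v p="$polls" -v i="$idle_polls" \
      'BEGIN { printf "completions per active poll: %.3f\n", c / (p - i) }'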
00:24:36.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:36.082 22:22:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:36.082 22:22:01 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:36.082 22:22:01 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:36.082 22:22:01 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:36.082 22:22:01 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.082 22:22:01 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.082 22:22:01 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.082 22:22:01 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:36.082 22:22:01 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.082 22:22:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:36.082 22:22:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:36.082 22:22:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:36.082 22:22:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:36.082 22:22:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:36.082 22:22:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:36.082 22:22:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:36.082 22:22:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:36.082 22:22:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:36.082 22:22:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:36.082 22:22:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:36.082 22:22:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:36.082 22:22:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:36.082 22:22:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:36.082 22:22:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:36.082 22:22:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:36.082 22:22:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:36.082 22:22:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:36.082 22:22:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:36.082 22:22:01 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:36.082 22:22:01 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:36.082 22:22:01 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:36.083 22:22:01 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.083 22:22:01 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.083 22:22:01 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.083 22:22:01 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:36.083 22:22:01 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.083 22:22:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:24:36.083 22:22:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:36.083 22:22:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:36.083 22:22:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:36.083 22:22:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:36.083 22:22:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:36.083 22:22:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:36.083 22:22:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:36.083 22:22:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:36.083 22:22:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:36.083 22:22:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:36.083 22:22:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:36.083 22:22:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:36.083 22:22:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:36.083 22:22:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:36.083 22:22:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:36.083 22:22:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.083 22:22:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:36.083 22:22:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.083 22:22:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:36.083 22:22:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:36.083 22:22:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:36.083 22:22:01 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:44.222 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:44.222 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:44.222 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:44.222 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:44.222 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:44.222 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:44.222 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:44.222 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:44.222 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:44.222 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:24:44.222 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:44.222 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:24:44.222 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:44.222 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:24:44.222 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:44.222 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:44.222 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:44.222 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:44.222 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:44.222 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:44.223 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
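The "Found 0000:4b:00.0 (0x8086 - 0x159b)" message above comes from matching PCI vendor/device IDs against the e810/x722/mlx arrays built a few lines earlier (0x8086:0x159b is an Intel E810-family NIC driven by ice). A minimal stand-alone sketch of the same lookup, assuming only that pciutils is installed (lspci is not what the test script itself uses), would be:

  # List Intel E810-family NICs by vendor:device ID, as the trace's e810 array does.
  # -D keeps the full PCI domain prefix (e.g. 0000:4b:00.0), -d filters by vendor:device.
  intel=8086
  for dev_id in 1592 159b; do
      lspci -D -d "${intel}:${dev_id}"
  done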
00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:44.223 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:44.223 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:44.223 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
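The "Found net devices under 0000:4b:00.0: cvl_0_0" lines are produced by globbing sysfs under each PCI address; a simplified, hedged equivalent of that pci_net_devs glob (PCI addresses copied from this trace, no claim that this is the exact helper code) is:

  # Map a PCI address to the kernel net device(s) bound to it via sysfs.
  for pci in 0000:4b:00.0 0000:4b:00.1; do
      for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$netdir" ] || continue                          # NIC has no netdev bound
          echo "Found net devices under $pci: ${netdir##*/}"    # e.g. cvl_0_0
      done
  done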
00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:44.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:44.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.340 ms 00:24:44.223 00:24:44.223 --- 10.0.0.2 ping statistics --- 00:24:44.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.223 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:44.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:44.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.388 ms 00:24:44.223 00:24:44.223 --- 10.0.0.1 ping statistics --- 00:24:44.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.223 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2880601 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2880601 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 2880601 ']' 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:44.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:44.223 22:22:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.223 [2024-07-15 22:22:08.437263] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:24:44.223 [2024-07-15 22:22:08.437325] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:44.223 EAL: No free 2048 kB hugepages reported on node 1 00:24:44.223 [2024-07-15 22:22:08.508284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:44.223 [2024-07-15 22:22:08.582827] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
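To recap the nvmf_tcp_init sequence traced above: the target-side NIC cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, the initiator-side NIC cvl_0_1 stays in the root namespace as 10.0.0.1, TCP port 4420 is opened in iptables, both directions are ping-checked, and nvmf_tgt is then started inside the namespace. A condensed sketch of those same steps (interface names, addresses and flags copied from the trace; error handling and the waitforlisten step omitted) looks roughly like:

  NS=cvl_0_0_ns_spdk
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                         # target NIC into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1    # sanity checks
  ip netns exec "$NS" "$SPDK"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &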
00:24:44.223 [2024-07-15 22:22:08.582862] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:44.223 [2024-07-15 22:22:08.582870] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:44.223 [2024-07-15 22:22:08.582876] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:44.223 [2024-07-15 22:22:08.582882] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:44.223 [2024-07-15 22:22:08.583016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:44.223 [2024-07-15 22:22:08.583157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:44.223 [2024-07-15 22:22:08.583284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:44.223 [2024-07-15 22:22:08.583285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:44.223 22:22:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:44.223 22:22:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:24:44.223 22:22:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:44.223 [2024-07-15 22:22:09.361086] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:44.223 22:22:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:44.223 22:22:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:44.223 22:22:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.224 22:22:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:44.481 Malloc1 00:24:44.481 22:22:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:44.481 22:22:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:44.739 22:22:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:44.997 [2024-07-15 22:22:10.095418] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:44.997 22:22:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:44.997 22:22:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:44.997 22:22:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:44.997 22:22:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:24:44.997 22:22:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:44.997 22:22:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:44.997 22:22:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:44.997 22:22:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:44.997 22:22:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:44.997 22:22:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:44.997 22:22:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:44.997 22:22:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:44.997 22:22:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:44.997 22:22:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:45.278 22:22:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:45.278 22:22:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:45.278 22:22:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:45.278 22:22:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:45.278 22:22:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:45.278 22:22:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:45.278 22:22:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:45.278 22:22:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:45.278 22:22:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:45.278 22:22:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:45.552 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:45.552 fio-3.35 00:24:45.552 Starting 1 thread 00:24:45.552 EAL: No free 2048 kB hugepages reported on node 1 00:24:48.104 00:24:48.104 test: (groupid=0, jobs=1): err= 0: pid=2881149: Mon Jul 15 22:22:13 2024 00:24:48.104 read: IOPS=13.7k, BW=53.7MiB/s (56.3MB/s)(108MiB/2004msec) 00:24:48.104 slat (usec): min=2, max=279, avg= 2.17, stdev= 2.40 00:24:48.104 clat (usec): min=3061, max=10829, avg=5331.54, stdev=901.19 00:24:48.104 lat (usec): min=3063, max=10832, avg=5333.71, stdev=901.28 00:24:48.104 clat percentiles (usec): 00:24:48.104 | 1.00th=[ 3818], 5.00th=[ 4293], 10.00th=[ 4490], 20.00th=[ 4752], 00:24:48.104 | 30.00th=[ 4883], 40.00th=[ 5014], 50.00th=[ 5145], 60.00th=[ 5276], 00:24:48.104 | 70.00th=[ 5473], 80.00th=[ 5735], 90.00th=[ 6456], 95.00th=[ 7242], 00:24:48.104 | 99.00th=[ 8717], 99.50th=[ 9110], 99.90th=[10421], 99.95th=[10683], 00:24:48.104 | 99.99th=[10814] 00:24:48.104 bw ( KiB/s): min=53264, 
max=55664, per=99.94%, avg=54926.00, stdev=1124.42, samples=4 00:24:48.104 iops : min=13316, max=13916, avg=13731.50, stdev=281.10, samples=4 00:24:48.104 write: IOPS=13.7k, BW=53.6MiB/s (56.2MB/s)(107MiB/2004msec); 0 zone resets 00:24:48.104 slat (usec): min=2, max=272, avg= 2.28, stdev= 1.81 00:24:48.104 clat (usec): min=2169, max=7634, avg=3946.28, stdev=526.94 00:24:48.104 lat (usec): min=2171, max=7636, avg=3948.56, stdev=527.02 00:24:48.104 clat percentiles (usec): 00:24:48.104 | 1.00th=[ 2638], 5.00th=[ 2966], 10.00th=[ 3261], 20.00th=[ 3556], 00:24:48.104 | 30.00th=[ 3752], 40.00th=[ 3884], 50.00th=[ 3982], 60.00th=[ 4080], 00:24:48.104 | 70.00th=[ 4178], 80.00th=[ 4293], 90.00th=[ 4490], 95.00th=[ 4752], 00:24:48.104 | 99.00th=[ 5342], 99.50th=[ 5538], 99.90th=[ 6194], 99.95th=[ 6718], 00:24:48.104 | 99.99th=[ 7504] 00:24:48.104 bw ( KiB/s): min=53728, max=55416, per=100.00%, avg=54854.00, stdev=766.20, samples=4 00:24:48.104 iops : min=13432, max=13854, avg=13713.50, stdev=191.55, samples=4 00:24:48.104 lat (msec) : 4=26.58%, 10=73.36%, 20=0.05% 00:24:48.104 cpu : usr=69.50%, sys=24.76%, ctx=30, majf=0, minf=7 00:24:48.104 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:48.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:48.104 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:48.104 issued rwts: total=27535,27482,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:48.104 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:48.104 00:24:48.104 Run status group 0 (all jobs): 00:24:48.104 READ: bw=53.7MiB/s (56.3MB/s), 53.7MiB/s-53.7MiB/s (56.3MB/s-56.3MB/s), io=108MiB (113MB), run=2004-2004msec 00:24:48.104 WRITE: bw=53.6MiB/s (56.2MB/s), 53.6MiB/s-53.6MiB/s (56.2MB/s-56.2MB/s), io=107MiB (113MB), run=2004-2004msec 00:24:48.104 22:22:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:48.104 22:22:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:48.104 22:22:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:48.104 22:22:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:48.104 22:22:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:48.104 22:22:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:48.104 22:22:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:48.104 22:22:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:48.104 22:22:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:48.104 22:22:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:48.104 22:22:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:48.104 22:22:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk 
'{print $3}' 00:24:48.104 22:22:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:48.104 22:22:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:48.104 22:22:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:48.104 22:22:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:48.104 22:22:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:48.104 22:22:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:48.104 22:22:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:48.104 22:22:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:48.104 22:22:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:48.104 22:22:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:48.365 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:48.365 fio-3.35 00:24:48.365 Starting 1 thread 00:24:48.365 EAL: No free 2048 kB hugepages reported on node 1 00:24:50.911 00:24:50.911 test: (groupid=0, jobs=1): err= 0: pid=2881956: Mon Jul 15 22:22:15 2024 00:24:50.911 read: IOPS=8722, BW=136MiB/s (143MB/s)(274MiB/2009msec) 00:24:50.911 slat (usec): min=3, max=111, avg= 3.63, stdev= 1.72 00:24:50.911 clat (usec): min=2872, max=23385, avg=9095.94, stdev=2470.28 00:24:50.911 lat (usec): min=2875, max=23389, avg=9099.58, stdev=2470.55 00:24:50.911 clat percentiles (usec): 00:24:50.911 | 1.00th=[ 4424], 5.00th=[ 5604], 10.00th=[ 6128], 20.00th=[ 6980], 00:24:50.911 | 30.00th=[ 7635], 40.00th=[ 8291], 50.00th=[ 8848], 60.00th=[ 9503], 00:24:50.911 | 70.00th=[10159], 80.00th=[11076], 90.00th=[12649], 95.00th=[13566], 00:24:50.911 | 99.00th=[16057], 99.50th=[16909], 99.90th=[17957], 99.95th=[18220], 00:24:50.911 | 99.99th=[18744] 00:24:50.911 bw ( KiB/s): min=60448, max=78656, per=49.89%, avg=69632.00, stdev=8680.01, samples=4 00:24:50.911 iops : min= 3778, max= 4916, avg=4352.00, stdev=542.50, samples=4 00:24:50.911 write: IOPS=5074, BW=79.3MiB/s (83.1MB/s)(141MiB/1783msec); 0 zone resets 00:24:50.911 slat (usec): min=39, max=458, avg=41.21, stdev= 8.94 00:24:50.911 clat (usec): min=1610, max=19156, avg=9800.76, stdev=1768.60 00:24:50.911 lat (usec): min=1650, max=19200, avg=9841.97, stdev=1771.42 00:24:50.911 clat percentiles (usec): 00:24:50.911 | 1.00th=[ 6652], 5.00th=[ 7439], 10.00th=[ 7767], 20.00th=[ 8291], 00:24:50.911 | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[ 9634], 60.00th=[10028], 00:24:50.911 | 70.00th=[10421], 80.00th=[10945], 90.00th=[11994], 95.00th=[13042], 00:24:50.911 | 99.00th=[15533], 99.50th=[16057], 99.90th=[17695], 99.95th=[17695], 00:24:50.911 | 99.99th=[19268] 00:24:50.911 bw ( KiB/s): min=62496, max=81472, per=89.15%, avg=72376.00, stdev=8705.47, samples=4 00:24:50.911 iops : min= 3906, max= 5092, avg=4523.50, stdev=544.09, samples=4 00:24:50.911 lat (msec) : 2=0.01%, 4=0.21%, 10=64.89%, 20=34.88%, 50=0.01% 00:24:50.911 cpu : usr=82.62%, sys=13.55%, ctx=14, majf=0, minf=12 00:24:50.911 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:24:50.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:50.911 issued rwts: total=17524,9047,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.911 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:50.911 00:24:50.911 Run status group 0 (all jobs): 00:24:50.911 READ: bw=136MiB/s (143MB/s), 136MiB/s-136MiB/s (143MB/s-143MB/s), io=274MiB (287MB), run=2009-2009msec 00:24:50.911 WRITE: bw=79.3MiB/s (83.1MB/s), 79.3MiB/s-79.3MiB/s (83.1MB/s-83.1MB/s), io=141MiB (148MB), run=1783-1783msec 00:24:50.911 22:22:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:50.911 22:22:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:50.911 22:22:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:50.911 22:22:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:50.911 22:22:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:50.911 22:22:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:50.911 22:22:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:24:50.911 22:22:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:50.911 22:22:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:24:50.911 22:22:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:50.911 22:22:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:50.911 rmmod nvme_tcp 00:24:50.911 rmmod nvme_fabrics 00:24:50.911 rmmod nvme_keyring 00:24:50.911 22:22:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:50.911 22:22:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:24:50.911 22:22:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:24:50.911 22:22:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2880601 ']' 00:24:50.912 22:22:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2880601 00:24:50.912 22:22:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 2880601 ']' 00:24:50.912 22:22:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 2880601 00:24:50.912 22:22:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:24:50.912 22:22:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:50.912 22:22:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2880601 00:24:50.912 22:22:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:50.912 22:22:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:50.912 22:22:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2880601' 00:24:50.912 killing process with pid 2880601 00:24:50.912 22:22:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 2880601 00:24:50.912 22:22:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 2880601 00:24:50.912 22:22:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:50.912 22:22:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:50.912 
22:22:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:50.912 22:22:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:50.912 22:22:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:50.912 22:22:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.912 22:22:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:50.912 22:22:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.457 22:22:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:53.457 00:24:53.457 real 0m17.265s 00:24:53.457 user 1m4.856s 00:24:53.457 sys 0m7.318s 00:24:53.457 22:22:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:53.457 22:22:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.457 ************************************ 00:24:53.457 END TEST nvmf_fio_host 00:24:53.457 ************************************ 00:24:53.457 22:22:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:53.457 22:22:18 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:53.457 22:22:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:53.457 22:22:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:53.457 22:22:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:53.457 ************************************ 00:24:53.457 START TEST nvmf_failover 00:24:53.457 ************************************ 00:24:53.457 22:22:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:53.457 * Looking for test storage... 
00:24:53.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:53.457 22:22:18 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:53.457 22:22:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:53.457 22:22:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:53.457 22:22:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:53.457 22:22:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:53.457 22:22:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:53.457 22:22:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:53.457 22:22:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:53.457 22:22:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:53.457 22:22:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:53.457 22:22:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:53.457 22:22:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:53.457 22:22:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:53.457 22:22:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:53.457 22:22:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:53.457 22:22:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:53.457 22:22:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:53.457 22:22:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:53.457 22:22:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:53.457 22:22:18 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:53.457 22:22:18 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:53.457 22:22:18 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:53.457 22:22:18 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.457 22:22:18 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.457 22:22:18 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.457 22:22:18 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:53.457 22:22:18 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.457 22:22:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:24:53.457 22:22:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:53.457 22:22:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:53.457 22:22:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:53.457 22:22:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:53.458 22:22:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:53.458 22:22:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:53.458 22:22:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:53.458 22:22:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:53.458 22:22:18 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:53.458 22:22:18 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:53.458 22:22:18 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:53.458 22:22:18 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:53.458 22:22:18 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:53.458 22:22:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:53.458 22:22:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:53.458 22:22:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:53.458 22:22:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:24:53.458 22:22:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:53.458 22:22:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:53.458 22:22:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:53.458 22:22:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.458 22:22:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:53.458 22:22:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:53.458 22:22:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:24:53.458 22:22:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:00.050 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:00.050 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:25:00.050 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:00.050 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:00.050 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:00.050 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:00.050 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:00.050 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:25:00.050 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:00.050 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:25:00.050 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:25:00.050 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:25:00.050 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:25:00.050 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:25:00.050 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:25:00.050 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:00.050 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:00.050 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:00.050 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:00.050 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:00.050 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:00.050 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:00.050 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:00.050 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:00.050 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:00.050 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:00.050 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:00.050 22:22:25 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:00.050 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:00.050 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:00.050 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:00.051 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:00.051 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:00.051 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:00.051 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:00.051 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:00.311 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:00.311 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:00.311 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:00.311 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:00.311 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.604 ms 00:25:00.311 00:25:00.311 --- 10.0.0.2 ping statistics --- 00:25:00.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.311 rtt min/avg/max/mdev = 0.604/0.604/0.604/0.000 ms 00:25:00.311 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:00.311 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:00.311 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:25:00.311 00:25:00.311 --- 10.0.0.1 ping statistics --- 00:25:00.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.312 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:25:00.312 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:00.312 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:25:00.312 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:00.312 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:00.312 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:00.312 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:00.312 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:00.312 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:00.312 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:00.312 22:22:25 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:00.312 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:00.312 22:22:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:00.312 22:22:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:00.312 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2886619 00:25:00.312 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2886619 00:25:00.312 22:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:00.312 22:22:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2886619 ']' 00:25:00.312 22:22:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.312 22:22:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:00.312 22:22:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:00.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:00.312 22:22:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:00.312 22:22:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:00.312 [2024-07-15 22:22:25.571014] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
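As in the fio_host run above, nvmfappstart launches the target inside the namespace, this time with core mask 0xE (three reactors for the failover test). A hedged approximation of the launch-and-wait step, using rpc_get_methods as a readiness probe in place of the suite's own waitforlisten helper, would be:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk "$SPDK"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # Poll the default RPC socket until the target answers (simplified stand-in for waitforlisten).
  until "$SPDK"/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"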
00:25:00.312 [2024-07-15 22:22:25.571062] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:00.312 EAL: No free 2048 kB hugepages reported on node 1 00:25:00.572 [2024-07-15 22:22:25.653346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:00.572 [2024-07-15 22:22:25.717463] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:00.572 [2024-07-15 22:22:25.717500] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:00.572 [2024-07-15 22:22:25.717508] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:00.572 [2024-07-15 22:22:25.717514] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:00.572 [2024-07-15 22:22:25.717520] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:00.572 [2024-07-15 22:22:25.717621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:00.572 [2024-07-15 22:22:25.717776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:00.572 [2024-07-15 22:22:25.717776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:01.144 22:22:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:01.144 22:22:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:01.144 22:22:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:01.144 22:22:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:01.144 22:22:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:01.144 22:22:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:01.144 22:22:26 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:01.406 [2024-07-15 22:22:26.565428] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:01.406 22:22:26 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:01.666 Malloc0 00:25:01.666 22:22:26 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:01.666 22:22:26 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:01.926 22:22:27 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:02.187 [2024-07-15 22:22:27.275414] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:02.187 22:22:27 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:02.187 [2024-07-15 
22:22:27.435805] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:02.187 22:22:27 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:02.447 [2024-07-15 22:22:27.600306] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:02.447 22:22:27 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2886986 00:25:02.447 22:22:27 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:02.447 22:22:27 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:02.447 22:22:27 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2886986 /var/tmp/bdevperf.sock 00:25:02.447 22:22:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2886986 ']' 00:25:02.447 22:22:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:02.447 22:22:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:02.447 22:22:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:02.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:02.447 22:22:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:02.447 22:22:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:03.387 22:22:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:03.387 22:22:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:03.387 22:22:28 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:03.387 NVMe0n1 00:25:03.387 22:22:28 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:03.955 00:25:03.955 22:22:29 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2887322 00:25:03.955 22:22:29 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:03.955 22:22:29 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:25:04.894 22:22:30 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:05.156 [2024-07-15 22:22:30.228848] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d78c50 is same with the state(5) to be set 00:25:05.156 [2024-07-15 22:22:30.228916] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1d78c50 is same with the state(5) to be set 00:25:05.156
[tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d78c50 is same with the state(5) to be set: repeated many times with advancing timestamps while the 4420 listener is torn down]
00:25:05.157 22:22:30 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:25:08.487 22:22:33 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:08.487 00:25:08.487 22:22:33 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:08.755
[tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7a3a0 is same with the state(5) to be set: repeated many times with advancing timestamps while the 4421 listener is torn down]
00:25:08.757 22:22:33 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:25:12.058 22:22:36 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:12.058 [2024-07-15 22:22:36.976829] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:12.058 22:22:37 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:25:13.006 22:22:38 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:13.006
[tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7aa80 is same with the state(5) to be set: repeated many times with advancing timestamps while the 4422 listener is torn down]
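At this point host/failover.sh has walked the whole failover sequence against the running bdevperf: with paths attached on ports 4420 and 4421 and the verify workload in flight, it removes the 4420 listener, attaches a third path on 4422, removes 4421, re-adds 4420, and finally removes 4422, so I/O has to fail over at each step and end up back on 4420. A condensed sketch of that RPC sequence as it appears in the trace (paths, subsystem NQN, and sleeps taken from this run; the variable names are mine):

  #!/usr/bin/env bash
  # Failover sequence from host/failover.sh, condensed from the trace above.
  set -euo pipefail

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"
  NQN=nqn.2016-06.io.spdk:cnode1
  PERF_SOCK=/var/tmp/bdevperf.sock

  # bdevperf already holds two paths (4420 and 4421) on controller NVMe0 and
  # perform_tests is running in the background.

  # 1. Drop the active listener; I/O fails over to the 4421 path.
  $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420
  sleep 3

  # 2. Attach a third path on 4422, then drop 4421.
  $RPC -s "$PERF_SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
      -s 4422 -f ipv4 -n $NQN
  $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4421
  sleep 3

  # 3. Re-add 4420, then drop 4422 to force failback onto 4420.
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4422

In this log the tcp.c:1621 recv-state errors appear in bursts right after each nvmf_subsystem_remove_listener call, presumably while the target tears down the connections on the removed listener.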
00:25:13.007 22:22:38 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 2887322 00:25:19.604 0 00:25:19.604 22:22:44 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 2886986 00:25:19.604 22:22:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2886986 ']' 00:25:19.604 22:22:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2886986 00:25:19.604 22:22:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:19.604 22:22:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:19.604 22:22:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2886986 00:25:19.604 22:22:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:19.604 22:22:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:19.604 22:22:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2886986' 00:25:19.604 killing process with pid 2886986 00:25:19.604 22:22:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2886986 00:25:19.604 22:22:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2886986 00:25:19.604 22:22:44 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:19.604 [2024-07-15 22:22:27.666475] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization...
00:25:19.604 [2024-07-15 22:22:27.666531] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2886986 ] 00:25:19.604 EAL: No free 2048 kB hugepages reported on node 1 00:25:19.604 [2024-07-15 22:22:27.725526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.604 [2024-07-15 22:22:27.789336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:19.604 Running I/O for 15 seconds... 00:25:19.604 [2024-07-15 22:22:30.230503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:96704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.604 [2024-07-15 22:22:30.230537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.604 [2024-07-15 22:22:30.230554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:96712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.604 [2024-07-15 22:22:30.230563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.604 [2024-07-15 22:22:30.230572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:96720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.604 [2024-07-15 22:22:30.230580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.604 [2024-07-15 22:22:30.230589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:96728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.604 [2024-07-15 22:22:30.230596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.604 [2024-07-15 22:22:30.230606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:96736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.604 [2024-07-15 22:22:30.230613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.604 [2024-07-15 22:22:30.230622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:96744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.604 [2024-07-15 22:22:30.230629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.604 [2024-07-15 22:22:30.230639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:96752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.604 [2024-07-15 22:22:30.230645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.604 [2024-07-15 22:22:30.230655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:96760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.604 [2024-07-15 22:22:30.230662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.604 [2024-07-15 22:22:30.230671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 
lba:96768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.604 [2024-07-15 22:22:30.230678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.604 [2024-07-15 22:22:30.230687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:96776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.604 [2024-07-15 22:22:30.230694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.604 [2024-07-15 22:22:30.230703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:96784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.604 [2024-07-15 22:22:30.230710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.604 [2024-07-15 22:22:30.230724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:96792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.604 [2024-07-15 22:22:30.230731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.604 [2024-07-15 22:22:30.230741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:96800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.604 [2024-07-15 22:22:30.230748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.604 [2024-07-15 22:22:30.230757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:96808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.604 [2024-07-15 22:22:30.230764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.604 [2024-07-15 22:22:30.230773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:96816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.604 [2024-07-15 22:22:30.230780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.604 [2024-07-15 22:22:30.230790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:96824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.604 [2024-07-15 22:22:30.230797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.604 [2024-07-15 22:22:30.230806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:96832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.604 [2024-07-15 22:22:30.230813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.604 [2024-07-15 22:22:30.230822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:96840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.604 [2024-07-15 22:22:30.230829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.604 [2024-07-15 22:22:30.230839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:96848 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:19.604 [2024-07-15 22:22:30.230846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.604 [2024-07-15 22:22:30.230855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:96856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.604 [2024-07-15 22:22:30.230862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.604 [2024-07-15 22:22:30.230871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:96864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.604 [2024-07-15 22:22:30.230878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.604 [2024-07-15 22:22:30.230887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:96872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.604 [2024-07-15 22:22:30.230894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.604 [2024-07-15 22:22:30.230903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.604 [2024-07-15 22:22:30.230910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.604 [2024-07-15 22:22:30.230919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:96888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.604 [2024-07-15 22:22:30.230928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.605 [2024-07-15 22:22:30.230937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:96896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.605 [2024-07-15 22:22:30.230944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.605 [2024-07-15 22:22:30.230953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:96904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.605 [2024-07-15 22:22:30.230960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.605 [2024-07-15 22:22:30.230969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:96912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.605 [2024-07-15 22:22:30.230976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.605 [2024-07-15 22:22:30.230985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:96920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.605 [2024-07-15 22:22:30.230992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.605 [2024-07-15 22:22:30.231002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.605 [2024-07-15 
22:22:30.231009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.605 [2024-07-15 22:22:30.231018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.605 [2024-07-15 22:22:30.231024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.605 [2024-07-15 22:22:30.231033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:96944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.605 [2024-07-15 22:22:30.231041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.605 [2024-07-15 22:22:30.231050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:96952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.605 [2024-07-15 22:22:30.231057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.605 [2024-07-15 22:22:30.231066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:96960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.605 [2024-07-15 22:22:30.231074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.605 [2024-07-15 22:22:30.231083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:96968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.605 [2024-07-15 22:22:30.231091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.605 [2024-07-15 22:22:30.231100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:96976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.605 [2024-07-15 22:22:30.231107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.605 [2024-07-15 22:22:30.231116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:96984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.605 [2024-07-15 22:22:30.231127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.605 [2024-07-15 22:22:30.231138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:96992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.605 [2024-07-15 22:22:30.231146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.605 [2024-07-15 22:22:30.231155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:97000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.605 [2024-07-15 22:22:30.231162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.605 [2024-07-15 22:22:30.231171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:97008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.605 [2024-07-15 22:22:30.231178] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.605 [2024-07-15 22:22:30.231187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.605 [2024-07-15 22:22:30.231194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.605 [2024-07-15 22:22:30.231203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:97024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.605 [2024-07-15 22:22:30.231210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.605 [2024-07-15 22:22:30.231219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:97032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.605 [2024-07-15 22:22:30.231226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.605 [2024-07-15 22:22:30.231235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:97040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.605 [2024-07-15 22:22:30.231242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.605 [2024-07-15 22:22:30.231251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:97048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.605 [2024-07-15 22:22:30.231258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.605 [2024-07-15 22:22:30.231267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:97056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.605 [2024-07-15 22:22:30.231274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.605 [2024-07-15 22:22:30.231284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:97064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.605 [2024-07-15 22:22:30.231291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.605 [2024-07-15 22:22:30.231300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.605 [2024-07-15 22:22:30.231308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.605 [2024-07-15 22:22:30.231317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:97080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.605 [2024-07-15 22:22:30.231323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.605 [2024-07-15 22:22:30.231333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.605 [2024-07-15 22:22:30.231342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.605 [2024-07-15 22:22:30.231352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:97096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.605 [2024-07-15 22:22:30.231358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.605 [2024-07-15 22:22:30.231367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:97104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.605 [2024-07-15 22:22:30.231374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.605 [2024-07-15 22:22:30.231383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:97112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.605 [2024-07-15 22:22:30.231390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.605 [2024-07-15 22:22:30.231400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:97120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.605 [2024-07-15 22:22:30.231407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.605 [2024-07-15 22:22:30.231416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:97128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.605 [2024-07-15 22:22:30.231423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.605 [2024-07-15 22:22:30.231432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:97136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.605 [2024-07-15 22:22:30.231439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.605 [2024-07-15 22:22:30.231448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:97144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.605 [2024-07-15 22:22:30.231455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.605 [2024-07-15 22:22:30.231464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:97152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.605 [2024-07-15 22:22:30.231471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.605 [2024-07-15 22:22:30.231481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:97160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.605 [2024-07-15 22:22:30.231488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.605 [2024-07-15 22:22:30.231497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:97168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.605 [2024-07-15 22:22:30.231504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.605 [2024-07-15 22:22:30.231513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:97176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.605 [2024-07-15 22:22:30.231520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.605 [2024-07-15 22:22:30.231529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:97184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.605 [2024-07-15 22:22:30.231536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.605 [2024-07-15 22:22:30.231547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:97192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.605 [2024-07-15 22:22:30.231554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.605 [2024-07-15 22:22:30.231563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:97200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.605 [2024-07-15 22:22:30.231570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.605 [2024-07-15 22:22:30.231579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:97208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.605 [2024-07-15 22:22:30.231586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.605 [2024-07-15 22:22:30.231595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:97216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.605 [2024-07-15 22:22:30.231602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.605 [2024-07-15 22:22:30.231612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:97224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.605 [2024-07-15 22:22:30.231618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.605 [2024-07-15 22:22:30.231627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:97232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.606 [2024-07-15 22:22:30.231635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.606 [2024-07-15 22:22:30.231644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:97240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.606 [2024-07-15 22:22:30.231651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.606 [2024-07-15 22:22:30.231661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:97248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.606 [2024-07-15 22:22:30.231668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:19.606 [2024-07-15 22:22:30.231677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:97256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.606 [2024-07-15 22:22:30.231684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.606 [2024-07-15 22:22:30.231693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:97264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.606 [2024-07-15 22:22:30.231700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.606 [2024-07-15 22:22:30.231710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:97280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.606 [2024-07-15 22:22:30.231716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.606 [2024-07-15 22:22:30.231725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:97288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.606 [2024-07-15 22:22:30.231733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.606 [2024-07-15 22:22:30.231742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:97296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.606 [2024-07-15 22:22:30.231749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.606 [2024-07-15 22:22:30.231759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:97304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.606 [2024-07-15 22:22:30.231766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.606 [2024-07-15 22:22:30.231775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:97312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.606 [2024-07-15 22:22:30.231782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.606 [2024-07-15 22:22:30.231791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:97320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.606 [2024-07-15 22:22:30.231799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.606 [2024-07-15 22:22:30.231807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:97328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.606 [2024-07-15 22:22:30.231814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.606 [2024-07-15 22:22:30.231823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:97336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.606 [2024-07-15 22:22:30.231830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.606 [2024-07-15 22:22:30.231839] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:97344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.606 [2024-07-15 22:22:30.231846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.606 [2024-07-15 22:22:30.231855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:97352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.606 [2024-07-15 22:22:30.231862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.606 [2024-07-15 22:22:30.231871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:97360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.606 [2024-07-15 22:22:30.231878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.606 [2024-07-15 22:22:30.231887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:97368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.606 [2024-07-15 22:22:30.231894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.606 [2024-07-15 22:22:30.231903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:97376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.606 [2024-07-15 22:22:30.231909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.606 [2024-07-15 22:22:30.231918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:97384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.606 [2024-07-15 22:22:30.231926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.606 [2024-07-15 22:22:30.231934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:97392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.606 [2024-07-15 22:22:30.231941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.606 [2024-07-15 22:22:30.231951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.606 [2024-07-15 22:22:30.231960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.606 [2024-07-15 22:22:30.231969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:97400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.606 [2024-07-15 22:22:30.231976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.606 [2024-07-15 22:22:30.231985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:97408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.606 [2024-07-15 22:22:30.231992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.606 [2024-07-15 22:22:30.232001] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.606 [2024-07-15 22:22:30.232008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.606 [2024-07-15 22:22:30.232016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.606 [2024-07-15 22:22:30.232024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.606 [2024-07-15 22:22:30.232033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:97432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.606 [2024-07-15 22:22:30.232040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.606 [2024-07-15 22:22:30.232049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:97440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.606 [2024-07-15 22:22:30.232055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.606 [2024-07-15 22:22:30.232064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:97448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.606 [2024-07-15 22:22:30.232071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.606 [2024-07-15 22:22:30.232083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:97456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.606 [2024-07-15 22:22:30.232090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.606 [2024-07-15 22:22:30.232100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:97464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.606 [2024-07-15 22:22:30.232106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.606 [2024-07-15 22:22:30.232116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:97472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.606 [2024-07-15 22:22:30.232127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.606 [2024-07-15 22:22:30.232137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:97480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.606 [2024-07-15 22:22:30.232144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.606 [2024-07-15 22:22:30.232153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:97488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.606 [2024-07-15 22:22:30.232160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.606 [2024-07-15 22:22:30.232171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:97496 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.606 [2024-07-15 22:22:30.232179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.606 [2024-07-15 22:22:30.232188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:97504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.606 [2024-07-15 22:22:30.232195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.606 [2024-07-15 22:22:30.232204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:97512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.606 [2024-07-15 22:22:30.232211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.606 [2024-07-15 22:22:30.232220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.606 [2024-07-15 22:22:30.232228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.606 [2024-07-15 22:22:30.232237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:97528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.606 [2024-07-15 22:22:30.232244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.606 [2024-07-15 22:22:30.232253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:97536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.606 [2024-07-15 22:22:30.232260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.606 [2024-07-15 22:22:30.232269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.606 [2024-07-15 22:22:30.232276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.606 [2024-07-15 22:22:30.232286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:97552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.606 [2024-07-15 22:22:30.232293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.606 [2024-07-15 22:22:30.232302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:97560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.606 [2024-07-15 22:22:30.232309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.606 [2024-07-15 22:22:30.232318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.606 [2024-07-15 22:22:30.232325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.607 [2024-07-15 22:22:30.232334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:97576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:19.607 [2024-07-15 22:22:30.232342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.607 [2024-07-15 22:22:30.232351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:97584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.607 [2024-07-15 22:22:30.232358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.607 [2024-07-15 22:22:30.232368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:97592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.607 [2024-07-15 22:22:30.232376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.607 [2024-07-15 22:22:30.232386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:97600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.607 [2024-07-15 22:22:30.232394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.607 [2024-07-15 22:22:30.232403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:97608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.607 [2024-07-15 22:22:30.232410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.607 [2024-07-15 22:22:30.232420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.607 [2024-07-15 22:22:30.232427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.607 [2024-07-15 22:22:30.232436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:97624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.607 [2024-07-15 22:22:30.232443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.607 [2024-07-15 22:22:30.232452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:97632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.607 [2024-07-15 22:22:30.232460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.607 [2024-07-15 22:22:30.232469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:97640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.607 [2024-07-15 22:22:30.232476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.607 [2024-07-15 22:22:30.232485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:97648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.607 [2024-07-15 22:22:30.232492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.607 [2024-07-15 22:22:30.232501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:97656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.607 [2024-07-15 22:22:30.232509] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.607 [2024-07-15 22:22:30.232518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:97664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.607 [2024-07-15 22:22:30.232525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.607 [2024-07-15 22:22:30.232534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:97672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.607 [2024-07-15 22:22:30.232541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.607 [2024-07-15 22:22:30.232549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:97680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.607 [2024-07-15 22:22:30.232557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.607 [2024-07-15 22:22:30.232566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:97688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.607 [2024-07-15 22:22:30.232573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.607 [2024-07-15 22:22:30.232582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:97696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.607 [2024-07-15 22:22:30.232593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.607 [2024-07-15 22:22:30.232602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:97704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.607 [2024-07-15 22:22:30.232610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.607 [2024-07-15 22:22:30.232619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:97712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.607 [2024-07-15 22:22:30.232626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.607 [2024-07-15 22:22:30.232648] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:19.607 [2024-07-15 22:22:30.232656] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:19.607 [2024-07-15 22:22:30.232663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97720 len:8 PRP1 0x0 PRP2 0x0 00:25:19.607 [2024-07-15 22:22:30.232671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.607 [2024-07-15 22:22:30.232708] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc10300 was disconnected and freed. reset controller. 
00:25:19.607 [2024-07-15 22:22:30.232718] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:19.607 [2024-07-15 22:22:30.232738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.607 [2024-07-15 22:22:30.232746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.607 [2024-07-15 22:22:30.232755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.607 [2024-07-15 22:22:30.232762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.607 [2024-07-15 22:22:30.232771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.607 [2024-07-15 22:22:30.232778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.607 [2024-07-15 22:22:30.232786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.607 [2024-07-15 22:22:30.232793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.607 [2024-07-15 22:22:30.232800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.607 [2024-07-15 22:22:30.236376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.607 [2024-07-15 22:22:30.236401] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbeeef0 (9): Bad file descriptor 00:25:19.607 [2024-07-15 22:22:30.272838] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
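The reset sequence above is the first failover event of the test: the qpair to 10.0.0.2:4420 is torn down, queued I/O is aborted with SQ DELETION status, controller [nqn.2016-06.io.spdk:cnode1] is failed, and bdev_nvme reconnects through the alternate trid 10.0.0.2:4421 before logging "Resetting controller successful". An alternate path like this is typically registered by attaching the same controller name once per listener. A minimal sketch using SPDK's rpc.py is shown below; the RPC socket path and bdev name are illustrative assumptions, and the exact invocation used by test/nvmf/host/failover.sh may differ:
# Hypothetical RPC socket (/var/tmp/bdevperf.sock) and bdev name (NVMe0), for illustration only.
# Primary path (the one disconnected at 22:22:30 above):
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# Alternate path; registering a second trid for the same controller gives bdev_nvme_failover_trid its 10.0.0.2:4421 target:
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1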
00:25:19.607 [2024-07-15 22:22:33.798911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:52992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.607 [2024-07-15 22:22:33.798947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.607 [2024-07-15 22:22:33.798962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:53000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.607 [2024-07-15 22:22:33.798971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.607 [2024-07-15 22:22:33.798986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:53008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.607 [2024-07-15 22:22:33.798993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.607 [2024-07-15 22:22:33.799002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:53016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.607 [2024-07-15 22:22:33.799009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.607 [2024-07-15 22:22:33.799019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:53024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.607 [2024-07-15 22:22:33.799026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.607 [2024-07-15 22:22:33.799035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:53032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.607 [2024-07-15 22:22:33.799042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.607 [2024-07-15 22:22:33.799051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.607 [2024-07-15 22:22:33.799058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.607 [2024-07-15 22:22:33.799068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:53048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.607 [2024-07-15 22:22:33.799075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.607 [2024-07-15 22:22:33.799085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:53056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.607 [2024-07-15 22:22:33.799092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.607 [2024-07-15 22:22:33.799101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:53064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.607 [2024-07-15 22:22:33.799107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.607 [2024-07-15 22:22:33.799117] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:53072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.607 [2024-07-15 22:22:33.799131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.607 [2024-07-15 22:22:33.799141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:53080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.607 [2024-07-15 22:22:33.799148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.607 [2024-07-15 22:22:33.799157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:53088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.607 [2024-07-15 22:22:33.799163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.607 [2024-07-15 22:22:33.799172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:53096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.607 [2024-07-15 22:22:33.799180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.607 [2024-07-15 22:22:33.799189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:53104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.607 [2024-07-15 22:22:33.799197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.607 [2024-07-15 22:22:33.799206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:53112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.607 [2024-07-15 22:22:33.799213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.608 [2024-07-15 22:22:33.799223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:53120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.608 [2024-07-15 22:22:33.799230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.608 [2024-07-15 22:22:33.799239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:53128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.608 [2024-07-15 22:22:33.799246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.608 [2024-07-15 22:22:33.799255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:53136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.608 [2024-07-15 22:22:33.799262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.608 [2024-07-15 22:22:33.799271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:53144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.608 [2024-07-15 22:22:33.799278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.608 [2024-07-15 22:22:33.799287] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:53152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.608 [2024-07-15 22:22:33.799295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.608 [2024-07-15 22:22:33.799304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:53160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.608 [2024-07-15 22:22:33.799310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.608 [2024-07-15 22:22:33.799320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:53168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.608 [2024-07-15 22:22:33.799327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.608 [2024-07-15 22:22:33.799336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:53176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.608 [2024-07-15 22:22:33.799343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.608 [2024-07-15 22:22:33.799352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:53184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.608 [2024-07-15 22:22:33.799359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.608 [2024-07-15 22:22:33.799368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:53192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.608 [2024-07-15 22:22:33.799376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.608 [2024-07-15 22:22:33.799385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:53200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.608 [2024-07-15 22:22:33.799391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.608 [2024-07-15 22:22:33.799402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:53208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.608 [2024-07-15 22:22:33.799408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.608 [2024-07-15 22:22:33.799419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:53216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.608 [2024-07-15 22:22:33.799426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.608 [2024-07-15 22:22:33.799435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:53224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.608 [2024-07-15 22:22:33.799442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.608 [2024-07-15 22:22:33.799451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:102 nsid:1 lba:53232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.608 [2024-07-15 22:22:33.799457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.608 [2024-07-15 22:22:33.799467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:53240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.608 [2024-07-15 22:22:33.799474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.608 [2024-07-15 22:22:33.799483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:53248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.608 [2024-07-15 22:22:33.799490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.608 [2024-07-15 22:22:33.799499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:53256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.608 [2024-07-15 22:22:33.799506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.608 [2024-07-15 22:22:33.799516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:53264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.608 [2024-07-15 22:22:33.799523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.608 [2024-07-15 22:22:33.799532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:53272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.608 [2024-07-15 22:22:33.799540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.608 [2024-07-15 22:22:33.799549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:53280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.608 [2024-07-15 22:22:33.799556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.608 [2024-07-15 22:22:33.799565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:53288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.608 [2024-07-15 22:22:33.799572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.608 [2024-07-15 22:22:33.799581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:53296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.608 [2024-07-15 22:22:33.799588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.608 [2024-07-15 22:22:33.799597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:53304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.608 [2024-07-15 22:22:33.799606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.608 [2024-07-15 22:22:33.799616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:53312 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.608 [2024-07-15 22:22:33.799623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.608 [2024-07-15 22:22:33.799631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:53320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.608 [2024-07-15 22:22:33.799638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.608 [2024-07-15 22:22:33.799647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:53328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.608 [2024-07-15 22:22:33.799654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.608 [2024-07-15 22:22:33.799663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:53336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.608 [2024-07-15 22:22:33.799670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.608 [2024-07-15 22:22:33.799679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:53344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.608 [2024-07-15 22:22:33.799686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.608 [2024-07-15 22:22:33.799695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:53352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.608 [2024-07-15 22:22:33.799702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.608 [2024-07-15 22:22:33.799711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:53360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.608 [2024-07-15 22:22:33.799718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.608 [2024-07-15 22:22:33.799728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:53368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.608 [2024-07-15 22:22:33.799735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.608 [2024-07-15 22:22:33.799743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:53376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.608 [2024-07-15 22:22:33.799750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.608 [2024-07-15 22:22:33.799759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:53384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.608 [2024-07-15 22:22:33.799766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.608 [2024-07-15 22:22:33.799775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:53392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:19.608 [2024-07-15 22:22:33.799782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.609 [2024-07-15 22:22:33.799793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:53400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.609 [2024-07-15 22:22:33.799801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.609 [2024-07-15 22:22:33.799810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:53408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.609 [2024-07-15 22:22:33.799819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.609 [2024-07-15 22:22:33.799828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:53416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.609 [2024-07-15 22:22:33.799835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.609 [2024-07-15 22:22:33.799844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:53424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.609 [2024-07-15 22:22:33.799852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.609 [2024-07-15 22:22:33.799861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:53432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.609 [2024-07-15 22:22:33.799868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.609 [2024-07-15 22:22:33.799877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:53440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.609 [2024-07-15 22:22:33.799884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.609 [2024-07-15 22:22:33.799893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:53616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.609 [2024-07-15 22:22:33.799900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.609 [2024-07-15 22:22:33.799909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:53624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.609 [2024-07-15 22:22:33.799916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.609 [2024-07-15 22:22:33.799925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:53632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.609 [2024-07-15 22:22:33.799932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.609 [2024-07-15 22:22:33.799941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:53640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.609 [2024-07-15 22:22:33.799948] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.609 [2024-07-15 22:22:33.799957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:53648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.609 [2024-07-15 22:22:33.799964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.609 [2024-07-15 22:22:33.799972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:53656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.609 [2024-07-15 22:22:33.799979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.609 [2024-07-15 22:22:33.799988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:53664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.609 [2024-07-15 22:22:33.799996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.609 [2024-07-15 22:22:33.800004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:53672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.609 [2024-07-15 22:22:33.800012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.609 [2024-07-15 22:22:33.800023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:53680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.609 [2024-07-15 22:22:33.800030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.609 [2024-07-15 22:22:33.800039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:53688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.609 [2024-07-15 22:22:33.800047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.609 [2024-07-15 22:22:33.800056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:53696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.609 [2024-07-15 22:22:33.800063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.609 [2024-07-15 22:22:33.800072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:53704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.609 [2024-07-15 22:22:33.800079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.609 [2024-07-15 22:22:33.800088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:53712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.609 [2024-07-15 22:22:33.800095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.609 [2024-07-15 22:22:33.800104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:53720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.609 [2024-07-15 22:22:33.800111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.609 [2024-07-15 22:22:33.800120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:53728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.609 [2024-07-15 22:22:33.800131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.609 [2024-07-15 22:22:33.800141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:53736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.609 [2024-07-15 22:22:33.800148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.609 [2024-07-15 22:22:33.800157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:53744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.609 [2024-07-15 22:22:33.800164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.609 [2024-07-15 22:22:33.800173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:53752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.609 [2024-07-15 22:22:33.800180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.609 [2024-07-15 22:22:33.800189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:53760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.609 [2024-07-15 22:22:33.800196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.609 [2024-07-15 22:22:33.800205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:53768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.609 [2024-07-15 22:22:33.800213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.609 [2024-07-15 22:22:33.800222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:53776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.609 [2024-07-15 22:22:33.800230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.609 [2024-07-15 22:22:33.800241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:53784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.609 [2024-07-15 22:22:33.800248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.609 [2024-07-15 22:22:33.800257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:53792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.609 [2024-07-15 22:22:33.800264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.609 [2024-07-15 22:22:33.800273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:53800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.609 [2024-07-15 22:22:33.800280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:19.609 [2024-07-15 22:22:33.800290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:53808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.609 [2024-07-15 22:22:33.800297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.609 [2024-07-15 22:22:33.800306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:53816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.609 [2024-07-15 22:22:33.800313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.609 [2024-07-15 22:22:33.800323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:53824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.609 [2024-07-15 22:22:33.800330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.609 [2024-07-15 22:22:33.800338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:53832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.609 [2024-07-15 22:22:33.800346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.609 [2024-07-15 22:22:33.800355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:53840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.609 [2024-07-15 22:22:33.800363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.609 [2024-07-15 22:22:33.800371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:53848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.609 [2024-07-15 22:22:33.800378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.609 [2024-07-15 22:22:33.800387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:53856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.609 [2024-07-15 22:22:33.800394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.609 [2024-07-15 22:22:33.800404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:53864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.609 [2024-07-15 22:22:33.800411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.609 [2024-07-15 22:22:33.800419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:53872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.609 [2024-07-15 22:22:33.800426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.609 [2024-07-15 22:22:33.800439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:53880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.609 [2024-07-15 22:22:33.800447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.609 [2024-07-15 
22:22:33.800456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:53888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.609 [2024-07-15 22:22:33.800463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.609 [2024-07-15 22:22:33.800472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:53896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.609 [2024-07-15 22:22:33.800479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.609 [2024-07-15 22:22:33.800488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:53904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.610 [2024-07-15 22:22:33.800495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.610 [2024-07-15 22:22:33.800504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:53912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.610 [2024-07-15 22:22:33.800512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.610 [2024-07-15 22:22:33.800521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:53920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.610 [2024-07-15 22:22:33.800527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.610 [2024-07-15 22:22:33.800537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:53928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.610 [2024-07-15 22:22:33.800546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.610 [2024-07-15 22:22:33.800556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:53936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.610 [2024-07-15 22:22:33.800563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.610 [2024-07-15 22:22:33.800572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:53944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.610 [2024-07-15 22:22:33.800579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.610 [2024-07-15 22:22:33.800589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:53952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.610 [2024-07-15 22:22:33.800596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.610 [2024-07-15 22:22:33.800605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:53960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.610 [2024-07-15 22:22:33.800612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.610 [2024-07-15 22:22:33.800621] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:53968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.610 [2024-07-15 22:22:33.800628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.610 [2024-07-15 22:22:33.800638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:53976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.610 [2024-07-15 22:22:33.800645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.610 [2024-07-15 22:22:33.800655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:53984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.610 [2024-07-15 22:22:33.800662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.610 [2024-07-15 22:22:33.800671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:53992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.610 [2024-07-15 22:22:33.800679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.610 [2024-07-15 22:22:33.800688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:54000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.610 [2024-07-15 22:22:33.800695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.610 [2024-07-15 22:22:33.800704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:54008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.610 [2024-07-15 22:22:33.800710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.610 [2024-07-15 22:22:33.800720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:53448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.610 [2024-07-15 22:22:33.800728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.610 [2024-07-15 22:22:33.800737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:53456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.610 [2024-07-15 22:22:33.800744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.610 [2024-07-15 22:22:33.800752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:53464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.610 [2024-07-15 22:22:33.800759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.610 [2024-07-15 22:22:33.800768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:53472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.610 [2024-07-15 22:22:33.800776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.610 [2024-07-15 22:22:33.800785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:30 nsid:1 lba:53480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.610 [2024-07-15 22:22:33.800792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.610 [2024-07-15 22:22:33.800801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:53488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.610 [2024-07-15 22:22:33.800807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.610 [2024-07-15 22:22:33.800816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:53496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.610 [2024-07-15 22:22:33.800827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.610 [2024-07-15 22:22:33.800836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:53504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.610 [2024-07-15 22:22:33.800843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.610 [2024-07-15 22:22:33.800852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:53512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.610 [2024-07-15 22:22:33.800860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.610 [2024-07-15 22:22:33.800870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:53520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.610 [2024-07-15 22:22:33.800877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.610 [2024-07-15 22:22:33.800886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:53528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.610 [2024-07-15 22:22:33.800893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.610 [2024-07-15 22:22:33.800901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:53536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.610 [2024-07-15 22:22:33.800908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.610 [2024-07-15 22:22:33.800918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:53544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.610 [2024-07-15 22:22:33.800926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.610 [2024-07-15 22:22:33.800935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:53552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.610 [2024-07-15 22:22:33.800942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.610 [2024-07-15 22:22:33.800951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:53560 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.610 [2024-07-15 22:22:33.800957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.610 [2024-07-15 22:22:33.800966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:53568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.610 [2024-07-15 22:22:33.800974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.610 [2024-07-15 22:22:33.800982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:53576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.610 [2024-07-15 22:22:33.800989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.610 [2024-07-15 22:22:33.800998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:53584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.610 [2024-07-15 22:22:33.801005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.610 [2024-07-15 22:22:33.801014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:53592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.610 [2024-07-15 22:22:33.801021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.610 [2024-07-15 22:22:33.801031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:53600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.610 [2024-07-15 22:22:33.801037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.610 [2024-07-15 22:22:33.801055] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:19.610 [2024-07-15 22:22:33.801062] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:19.610 [2024-07-15 22:22:33.801068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53608 len:8 PRP1 0x0 PRP2 0x0 00:25:19.610 [2024-07-15 22:22:33.801079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.610 [2024-07-15 22:22:33.801115] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc12270 was disconnected and freed. reset controller. 
00:25:19.610 [2024-07-15 22:22:33.801130] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:19.610 [2024-07-15 22:22:33.801149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.610 [2024-07-15 22:22:33.801157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.610 [2024-07-15 22:22:33.801166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.610 [2024-07-15 22:22:33.801173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.610 [2024-07-15 22:22:33.801181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.610 [2024-07-15 22:22:33.801187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.610 [2024-07-15 22:22:33.801195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.610 [2024-07-15 22:22:33.801202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.610 [2024-07-15 22:22:33.801209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.610 [2024-07-15 22:22:33.801232] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbeeef0 (9): Bad file descriptor 00:25:19.610 [2024-07-15 22:22:33.804781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.610 [2024-07-15 22:22:33.845649] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:19.611 [2024-07-15 22:22:38.152056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:77784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.611 [2024-07-15 22:22:38.152092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.611 [2024-07-15 22:22:38.152109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:77792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.611 [2024-07-15 22:22:38.152117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.611 [2024-07-15 22:22:38.152131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:77800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.611 [2024-07-15 22:22:38.152138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.611 [2024-07-15 22:22:38.152148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:77808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.611 [2024-07-15 22:22:38.152155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.611 [2024-07-15 22:22:38.152166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:77816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.611 [2024-07-15 22:22:38.152173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.611 [2024-07-15 22:22:38.152182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:77824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.611 [2024-07-15 22:22:38.152194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.611 [2024-07-15 22:22:38.152203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:77832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.611 [2024-07-15 22:22:38.152210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.611 [2024-07-15 22:22:38.152219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:77840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.611 [2024-07-15 22:22:38.152226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.611 [2024-07-15 22:22:38.152235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:77848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.611 [2024-07-15 22:22:38.152242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.611 [2024-07-15 22:22:38.152251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.611 [2024-07-15 22:22:38.152259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.611 [2024-07-15 22:22:38.152268] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:77864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.611 [2024-07-15 22:22:38.152275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.611 [2024-07-15 22:22:38.152284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:77872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.611 [2024-07-15 22:22:38.152291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.611 [2024-07-15 22:22:38.152300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:77880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.611 [2024-07-15 22:22:38.152307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.611 [2024-07-15 22:22:38.152316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:77888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.611 [2024-07-15 22:22:38.152323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.611 [2024-07-15 22:22:38.152333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:77896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.611 [2024-07-15 22:22:38.152339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.611 [2024-07-15 22:22:38.152348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:77904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.611 [2024-07-15 22:22:38.152355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.611 [2024-07-15 22:22:38.152365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:77912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.611 [2024-07-15 22:22:38.152372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.611 [2024-07-15 22:22:38.152381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:77920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.611 [2024-07-15 22:22:38.152388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.611 [2024-07-15 22:22:38.152399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.611 [2024-07-15 22:22:38.152406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.611 [2024-07-15 22:22:38.152415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:77936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.611 [2024-07-15 22:22:38.152422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.611 [2024-07-15 22:22:38.152431] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:77944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.611 [2024-07-15 22:22:38.152438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.611 [2024-07-15 22:22:38.152447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:77952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.611 [2024-07-15 22:22:38.152454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.611 [2024-07-15 22:22:38.152463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:77960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.611 [2024-07-15 22:22:38.152470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.611 [2024-07-15 22:22:38.152480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:77968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.611 [2024-07-15 22:22:38.152486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.611 [2024-07-15 22:22:38.152495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:77976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.611 [2024-07-15 22:22:38.152502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.611 [2024-07-15 22:22:38.152512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:77984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.611 [2024-07-15 22:22:38.152519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.611 [2024-07-15 22:22:38.152528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:77992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.611 [2024-07-15 22:22:38.152535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.611 [2024-07-15 22:22:38.152543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:78000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.611 [2024-07-15 22:22:38.152550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.611 [2024-07-15 22:22:38.152559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.611 [2024-07-15 22:22:38.152566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.611 [2024-07-15 22:22:38.152576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:78016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.611 [2024-07-15 22:22:38.152582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.611 [2024-07-15 22:22:38.152591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:88 nsid:1 lba:78024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.611 [2024-07-15 22:22:38.152598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.611 [2024-07-15 22:22:38.152610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:78032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.611 [2024-07-15 22:22:38.152618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.611 [2024-07-15 22:22:38.152626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.611 [2024-07-15 22:22:38.152635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.611 [2024-07-15 22:22:38.152644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:78048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.611 [2024-07-15 22:22:38.152651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.611 [2024-07-15 22:22:38.152661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.611 [2024-07-15 22:22:38.152668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.611 [2024-07-15 22:22:38.152677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:78064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.611 [2024-07-15 22:22:38.152684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.611 [2024-07-15 22:22:38.152693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.611 [2024-07-15 22:22:38.152701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.611 [2024-07-15 22:22:38.152711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:78080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.611 [2024-07-15 22:22:38.152718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.611 [2024-07-15 22:22:38.152727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:78088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.611 [2024-07-15 22:22:38.152734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.611 [2024-07-15 22:22:38.152743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:78096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.611 [2024-07-15 22:22:38.152750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.611 [2024-07-15 22:22:38.152760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78104 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.611 [2024-07-15 22:22:38.152767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.611 [2024-07-15 22:22:38.152776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.611 [2024-07-15 22:22:38.152783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.611 [2024-07-15 22:22:38.152792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:78120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.612 [2024-07-15 22:22:38.152799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.612 [2024-07-15 22:22:38.152808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:78128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.612 [2024-07-15 22:22:38.152817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.612 [2024-07-15 22:22:38.152826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:78136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.612 [2024-07-15 22:22:38.152833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.612 [2024-07-15 22:22:38.152842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:78144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.612 [2024-07-15 22:22:38.152849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.612 [2024-07-15 22:22:38.152858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:78152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.612 [2024-07-15 22:22:38.152865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.612 [2024-07-15 22:22:38.152875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:78160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.612 [2024-07-15 22:22:38.152882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.612 [2024-07-15 22:22:38.152891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.612 [2024-07-15 22:22:38.152898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.612 [2024-07-15 22:22:38.152907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:78176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.612 [2024-07-15 22:22:38.152914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.612 [2024-07-15 22:22:38.152923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:78184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:19.612 [2024-07-15 22:22:38.152930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.612 [2024-07-15 22:22:38.152939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.612 [2024-07-15 22:22:38.152947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.612 [2024-07-15 22:22:38.152955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:78200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.612 [2024-07-15 22:22:38.152963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.612 [2024-07-15 22:22:38.152972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:78208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.612 [2024-07-15 22:22:38.152979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.612 [2024-07-15 22:22:38.152987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:78216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.612 [2024-07-15 22:22:38.152995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.612 [2024-07-15 22:22:38.153004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:78224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.612 [2024-07-15 22:22:38.153011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.612 [2024-07-15 22:22:38.153021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:78232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.612 [2024-07-15 22:22:38.153028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.612 [2024-07-15 22:22:38.153038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:78240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.612 [2024-07-15 22:22:38.153045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.612 [2024-07-15 22:22:38.153054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:78248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.612 [2024-07-15 22:22:38.153061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.612 [2024-07-15 22:22:38.153070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:78256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.612 [2024-07-15 22:22:38.153076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.612 [2024-07-15 22:22:38.153086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:78264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.612 [2024-07-15 22:22:38.153093] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.612 [2024-07-15 22:22:38.153102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:78272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.612 [2024-07-15 22:22:38.153109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.612 [2024-07-15 22:22:38.153118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:78280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.612 [2024-07-15 22:22:38.153128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.612 [2024-07-15 22:22:38.153137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.612 [2024-07-15 22:22:38.153145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.612 [2024-07-15 22:22:38.153154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:78296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.612 [2024-07-15 22:22:38.153161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.612 [2024-07-15 22:22:38.153170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:78304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.612 [2024-07-15 22:22:38.153177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.612 [2024-07-15 22:22:38.153185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:78312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.612 [2024-07-15 22:22:38.153193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.612 [2024-07-15 22:22:38.153202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:78320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.612 [2024-07-15 22:22:38.153209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.612 [2024-07-15 22:22:38.153218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.612 [2024-07-15 22:22:38.153227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.612 [2024-07-15 22:22:38.153237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:78336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.612 [2024-07-15 22:22:38.153244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.612 [2024-07-15 22:22:38.153253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:78344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.612 [2024-07-15 22:22:38.153259] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.612 [2024-07-15 22:22:38.153268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:78352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.612 [2024-07-15 22:22:38.153275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.612 [2024-07-15 22:22:38.153285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.612 [2024-07-15 22:22:38.153292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.612 [2024-07-15 22:22:38.153301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:78368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.612 [2024-07-15 22:22:38.153308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.612 [2024-07-15 22:22:38.153317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:78376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.612 [2024-07-15 22:22:38.153323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.612 [2024-07-15 22:22:38.153332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:78384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.612 [2024-07-15 22:22:38.153339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.612 [2024-07-15 22:22:38.153348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:78392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.612 [2024-07-15 22:22:38.153355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.612 [2024-07-15 22:22:38.153364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:78400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.612 [2024-07-15 22:22:38.153371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.612 [2024-07-15 22:22:38.153381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:78408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.612 [2024-07-15 22:22:38.153389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.612 [2024-07-15 22:22:38.153398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:78416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.613 [2024-07-15 22:22:38.153405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.613 [2024-07-15 22:22:38.153414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:78424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.613 [2024-07-15 22:22:38.153421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.613 [2024-07-15 22:22:38.153432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.613 [2024-07-15 22:22:38.153439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.613 [2024-07-15 22:22:38.153448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:78440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.613 [2024-07-15 22:22:38.153455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.613 [2024-07-15 22:22:38.153464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:78448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.613 [2024-07-15 22:22:38.153471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.613 [2024-07-15 22:22:38.153480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:78456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.613 [2024-07-15 22:22:38.153487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.613 [2024-07-15 22:22:38.153496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.613 [2024-07-15 22:22:38.153503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.613 [2024-07-15 22:22:38.153512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:78472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.613 [2024-07-15 22:22:38.153518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.613 [2024-07-15 22:22:38.153528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.613 [2024-07-15 22:22:38.153535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.613 [2024-07-15 22:22:38.153544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:78488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.613 [2024-07-15 22:22:38.153550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.613 [2024-07-15 22:22:38.153560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.613 [2024-07-15 22:22:38.153566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.613 [2024-07-15 22:22:38.153576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:78504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.613 [2024-07-15 22:22:38.153583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.613 [2024-07-15 22:22:38.153592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:78512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.613 [2024-07-15 22:22:38.153598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.613 [2024-07-15 22:22:38.153607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:78520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.613 [2024-07-15 22:22:38.153614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.613 [2024-07-15 22:22:38.153623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:78528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.613 [2024-07-15 22:22:38.153630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.613 [2024-07-15 22:22:38.153640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:78536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.613 [2024-07-15 22:22:38.153647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.613 [2024-07-15 22:22:38.153656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:78544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.613 [2024-07-15 22:22:38.153662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.613 [2024-07-15 22:22:38.153671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:78552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.613 [2024-07-15 22:22:38.153679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.613 [2024-07-15 22:22:38.153688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:78560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.613 [2024-07-15 22:22:38.153695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.613 [2024-07-15 22:22:38.153704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.613 [2024-07-15 22:22:38.153711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.613 [2024-07-15 22:22:38.153720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:78576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.613 [2024-07-15 22:22:38.153727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.613 [2024-07-15 22:22:38.153736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:78584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.613 [2024-07-15 22:22:38.153743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.613 
[2024-07-15 22:22:38.153752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:78592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.613 [2024-07-15 22:22:38.153759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.613 [2024-07-15 22:22:38.153768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:78600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.613 [2024-07-15 22:22:38.153775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.613 [2024-07-15 22:22:38.153784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:78608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.613 [2024-07-15 22:22:38.153791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.613 [2024-07-15 22:22:38.153800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:78616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.613 [2024-07-15 22:22:38.153807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.613 [2024-07-15 22:22:38.153816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:78624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.613 [2024-07-15 22:22:38.153824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.613 [2024-07-15 22:22:38.153833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:78632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.613 [2024-07-15 22:22:38.153840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.613 [2024-07-15 22:22:38.153849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:78640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.613 [2024-07-15 22:22:38.153856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.613 [2024-07-15 22:22:38.153865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:78648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.613 [2024-07-15 22:22:38.153872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.613 [2024-07-15 22:22:38.153881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:78656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.613 [2024-07-15 22:22:38.153888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.613 [2024-07-15 22:22:38.153896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:78664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.613 [2024-07-15 22:22:38.153903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.613 [2024-07-15 22:22:38.153912] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:78672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.613 [2024-07-15 22:22:38.153919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.613 [2024-07-15 22:22:38.153928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:78680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.613 [2024-07-15 22:22:38.153935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.613 [2024-07-15 22:22:38.153944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:78688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.613 [2024-07-15 22:22:38.153951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.613 [2024-07-15 22:22:38.153960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.613 [2024-07-15 22:22:38.153967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.613 [2024-07-15 22:22:38.153976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:78704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.613 [2024-07-15 22:22:38.153982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.613 [2024-07-15 22:22:38.153991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:78712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.613 [2024-07-15 22:22:38.153998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.613 [2024-07-15 22:22:38.154007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:78720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.613 [2024-07-15 22:22:38.154014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.613 [2024-07-15 22:22:38.154024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:78728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.613 [2024-07-15 22:22:38.154031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.613 [2024-07-15 22:22:38.154041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:78736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.613 [2024-07-15 22:22:38.154048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.613 [2024-07-15 22:22:38.154056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.613 [2024-07-15 22:22:38.154064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.613 [2024-07-15 22:22:38.154073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:81 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.613 [2024-07-15 22:22:38.154079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.614 [2024-07-15 22:22:38.154088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:78760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.614 [2024-07-15 22:22:38.154095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.614 [2024-07-15 22:22:38.154103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:78768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.614 [2024-07-15 22:22:38.154111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.614 [2024-07-15 22:22:38.154120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:78776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.614 [2024-07-15 22:22:38.154131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.614 [2024-07-15 22:22:38.154139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.614 [2024-07-15 22:22:38.154146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.614 [2024-07-15 22:22:38.154155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:78792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.614 [2024-07-15 22:22:38.154161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.614 [2024-07-15 22:22:38.154184] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:19.614 [2024-07-15 22:22:38.154191] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:19.614 [2024-07-15 22:22:38.154198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78800 len:8 PRP1 0x0 PRP2 0x0 00:25:19.614 [2024-07-15 22:22:38.154206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.614 [2024-07-15 22:22:38.154242] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc12f20 was disconnected and freed. reset controller. 
00:25:19.614 [2024-07-15 22:22:38.154252] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:19.614 [2024-07-15 22:22:38.154271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.614 [2024-07-15 22:22:38.154279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.614 [2024-07-15 22:22:38.154288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.614 [2024-07-15 22:22:38.154295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.614 [2024-07-15 22:22:38.154306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.614 [2024-07-15 22:22:38.154313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.614 [2024-07-15 22:22:38.154321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.614 [2024-07-15 22:22:38.154328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.614 [2024-07-15 22:22:38.154336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.614 [2024-07-15 22:22:38.157913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.614 [2024-07-15 22:22:38.157941] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbeeef0 (9): Bad file descriptor 00:25:19.614 [2024-07-15 22:22:38.201380] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:19.614 00:25:19.614 Latency(us) 00:25:19.614 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:19.614 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:19.614 Verification LBA range: start 0x0 length 0x4000 00:25:19.614 NVMe0n1 : 15.00 11791.44 46.06 288.77 0.00 10567.82 1058.13 19223.89 00:25:19.614 =================================================================================================================== 00:25:19.614 Total : 11791.44 46.06 288.77 0.00 10567.82 1058.13 19223.89 00:25:19.614 Received shutdown signal, test time was about 15.000000 seconds 00:25:19.614 00:25:19.614 Latency(us) 00:25:19.614 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:19.614 =================================================================================================================== 00:25:19.614 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:19.614 22:22:44 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:25:19.614 22:22:44 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:25:19.614 22:22:44 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:25:19.614 22:22:44 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2890259 00:25:19.614 22:22:44 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2890259 /var/tmp/bdevperf.sock 00:25:19.614 22:22:44 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:25:19.614 22:22:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2890259 ']' 00:25:19.614 22:22:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:19.614 22:22:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:19.614 22:22:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:19.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:25:19.614 22:22:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:19.614 22:22:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:20.186 22:22:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:20.186 22:22:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:20.186 22:22:45 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:20.186 [2024-07-15 22:22:45.386586] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:20.186 22:22:45 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:20.448 [2024-07-15 22:22:45.546947] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:20.448 22:22:45 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:20.709 NVMe0n1 00:25:20.709 22:22:45 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:20.971 00:25:20.971 22:22:46 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:21.233 00:25:21.233 22:22:46 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:21.233 22:22:46 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:21.494 22:22:46 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:21.753 22:22:46 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:25.046 22:22:49 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:25.046 22:22:49 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:25.046 22:22:50 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2891353 00:25:25.046 22:22:50 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 2891353 00:25:25.046 22:22:50 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:25.996 0 00:25:25.996 22:22:51 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:25.996 [2024-07-15 22:22:44.483158] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
00:25:25.996 [2024-07-15 22:22:44.483236] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2890259 ] 00:25:25.996 EAL: No free 2048 kB hugepages reported on node 1 00:25:25.996 [2024-07-15 22:22:44.544337] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.996 [2024-07-15 22:22:44.607193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:25.996 [2024-07-15 22:22:46.828612] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:25.996 [2024-07-15 22:22:46.828654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:25.996 [2024-07-15 22:22:46.828665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:25.996 [2024-07-15 22:22:46.828674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:25.996 [2024-07-15 22:22:46.828681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:25.996 [2024-07-15 22:22:46.828689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:25.996 [2024-07-15 22:22:46.828696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:25.996 [2024-07-15 22:22:46.828704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:25.996 [2024-07-15 22:22:46.828711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:25.996 [2024-07-15 22:22:46.828719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:25.996 [2024-07-15 22:22:46.828746] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:25.996 [2024-07-15 22:22:46.828760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba9ef0 (9): Bad file descriptor 00:25:25.996 [2024-07-15 22:22:46.841474] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:25.996 Running I/O for 1 seconds... 
00:25:25.996 00:25:25.996 Latency(us) 00:25:25.996 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:25.996 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:25.996 Verification LBA range: start 0x0 length 0x4000 00:25:25.996 NVMe0n1 : 1.01 11489.02 44.88 0.00 0.00 11080.19 1078.61 11905.71 00:25:25.996 =================================================================================================================== 00:25:25.996 Total : 11489.02 44.88 0.00 0.00 11080.19 1078.61 11905.71 00:25:25.996 22:22:51 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:25.996 22:22:51 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:26.259 22:22:51 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:26.259 22:22:51 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:26.259 22:22:51 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:26.520 22:22:51 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:26.520 22:22:51 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:29.819 22:22:54 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:29.819 22:22:54 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:29.819 22:22:55 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 2890259 00:25:29.819 22:22:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2890259 ']' 00:25:29.819 22:22:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2890259 00:25:29.819 22:22:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:29.819 22:22:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:29.819 22:22:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2890259 00:25:29.819 22:22:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:29.819 22:22:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:29.819 22:22:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2890259' 00:25:29.819 killing process with pid 2890259 00:25:29.819 22:22:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2890259 00:25:29.819 22:22:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2890259 00:25:30.081 22:22:55 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:30.081 22:22:55 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:30.081 22:22:55 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:30.081 
22:22:55 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:30.081 22:22:55 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:30.081 22:22:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:30.081 22:22:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:25:30.081 22:22:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:30.081 22:22:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:25:30.081 22:22:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:30.081 22:22:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:30.081 rmmod nvme_tcp 00:25:30.081 rmmod nvme_fabrics 00:25:30.081 rmmod nvme_keyring 00:25:30.341 22:22:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:30.341 22:22:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:25:30.341 22:22:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:25:30.341 22:22:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2886619 ']' 00:25:30.341 22:22:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2886619 00:25:30.341 22:22:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2886619 ']' 00:25:30.341 22:22:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2886619 00:25:30.341 22:22:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:30.341 22:22:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:30.341 22:22:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2886619 00:25:30.341 22:22:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:30.341 22:22:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:30.341 22:22:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2886619' 00:25:30.341 killing process with pid 2886619 00:25:30.341 22:22:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2886619 00:25:30.341 22:22:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2886619 00:25:30.341 22:22:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:30.341 22:22:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:30.341 22:22:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:30.341 22:22:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:30.341 22:22:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:30.341 22:22:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:30.341 22:22:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:30.341 22:22:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:32.925 22:22:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:32.925 00:25:32.925 real 0m39.321s 00:25:32.925 user 2m2.072s 00:25:32.925 sys 0m7.962s 00:25:32.925 22:22:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:32.925 22:22:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
00:25:32.925 ************************************ 00:25:32.925 END TEST nvmf_failover 00:25:32.925 ************************************ 00:25:32.925 22:22:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:32.925 22:22:57 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:32.925 22:22:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:32.925 22:22:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:32.925 22:22:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:32.925 ************************************ 00:25:32.925 START TEST nvmf_host_discovery 00:25:32.925 ************************************ 00:25:32.925 22:22:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:32.925 * Looking for test storage... 00:25:32.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:32.925 22:22:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:32.925 22:22:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:32.925 22:22:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:32.925 22:22:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:32.925 22:22:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:32.925 22:22:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:32.925 22:22:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:32.925 22:22:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:32.925 22:22:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:32.925 22:22:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:32.925 22:22:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:32.925 22:22:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:32.925 22:22:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:32.925 22:22:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:32.925 22:22:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:32.925 22:22:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:32.925 22:22:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:32.925 22:22:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:32.925 22:22:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:32.925 22:22:57 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:32.925 22:22:57 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:32.925 22:22:57 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:32.925 22:22:57 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.925 22:22:57 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.925 22:22:57 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.925 22:22:57 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:32.925 22:22:57 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.925 22:22:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:25:32.925 22:22:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:32.925 22:22:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:32.925 22:22:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:32.925 22:22:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:32.925 22:22:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:32.926 22:22:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:32.926 22:22:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:32.926 22:22:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:32.926 22:22:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:32.926 22:22:57 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:32.926 22:22:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:32.926 22:22:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:32.926 22:22:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:32.926 22:22:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:32.926 22:22:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:32.926 22:22:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:32.926 22:22:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:32.926 22:22:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:32.926 22:22:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:32.926 22:22:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:32.926 22:22:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:32.926 22:22:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:32.926 22:22:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:32.926 22:22:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:32.926 22:22:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:32.926 22:22:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:25:32.926 22:22:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.517 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:39.517 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:25:39.517 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:39.517 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:39.517 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:39.517 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:39.517 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:39.517 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:25:39.517 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:39.517 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:25:39.517 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:25:39.517 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:25:39.517 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:25:39.517 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:25:39.517 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:25:39.517 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:39.517 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:39.518 22:23:04 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:39.518 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:39.518 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:39.518 22:23:04 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:39.518 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:39.518 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:39.518 22:23:04 
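[editor's note] The interface names echoed above (cvl_0_0, cvl_0_1) come straight from sysfs: each supported PCI function found by gather_supported_nvmf_pci_devs exposes its kernel net device under its device directory. A minimal stand-alone check, assuming the 0000:4b:00.0 / 0000:4b:00.1 functions and the Intel 0x8086:0x159b device ID reported in this run:

  # list PCI functions matching the device ID the trace matched above
  lspci -d 8086:159b

  # the net device bound to a function is published under its sysfs node
  ls /sys/bus/pci/devices/0000:4b:00.0/net/   # -> cvl_0_0
  ls /sys/bus/pci/devices/0000:4b:00.1/net/   # -> cvl_0_1

  # confirm both links exist and their admin state before the test claims them
  ip -br link show cvl_0_0 cvl_0_1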
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:39.518 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:39.779 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:39.779 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:39.779 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:39.779 22:23:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:39.779 22:23:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:39.779 22:23:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:39.779 22:23:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:39.779 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:39.779 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.594 ms 00:25:39.779 00:25:39.779 --- 10.0.0.2 ping statistics --- 00:25:39.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.779 rtt min/avg/max/mdev = 0.594/0.594/0.594/0.000 ms 00:25:39.779 22:23:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:39.779 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:39.779 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.378 ms 00:25:39.779 00:25:39.779 --- 10.0.0.1 ping statistics --- 00:25:39.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.779 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:25:39.779 22:23:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:39.779 22:23:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:25:39.779 22:23:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:39.779 22:23:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:39.779 22:23:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:39.779 22:23:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:39.779 22:23:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:39.779 22:23:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:39.779 22:23:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:40.041 22:23:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:40.041 22:23:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:40.041 22:23:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:40.041 22:23:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.041 22:23:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2896392 00:25:40.041 22:23:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:40.041 22:23:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 2896392 00:25:40.041 22:23:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2896392 ']' 00:25:40.041 22:23:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:40.041 22:23:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:40.041 22:23:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:40.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:40.041 22:23:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:40.041 22:23:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.041 [2024-07-15 22:23:05.200677] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:25:40.041 [2024-07-15 22:23:05.200745] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:40.041 EAL: No free 2048 kB hugepages reported on node 1 00:25:40.041 [2024-07-15 22:23:05.271009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.041 [2024-07-15 22:23:05.365056] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:40.041 [2024-07-15 22:23:05.365113] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:40.041 [2024-07-15 22:23:05.365133] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:40.041 [2024-07-15 22:23:05.365141] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:40.041 [2024-07-15 22:23:05.365147] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
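[editor's note] The nvmf_tcp_init / nvmfappstart sequence traced above moves the target-side port into a private network namespace, addresses both ends, verifies reachability, and then launches nvmf_tgt inside that namespace. A consolidated sketch of that bring-up, using only the commands shown in the trace (iproute2, iptables, and the nvmf_tgt binary path from this workspace):

  # clear any stale addressing, then create the target namespace and move the target port into it
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # initiator address in the root namespace, target address inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

  # bring the links (and loopback inside the namespace) up, and allow NVMe/TCP traffic in
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # reachability check in both directions, then start the target app inside the namespace
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &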
00:25:40.041 [2024-07-15 22:23:05.365172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:40.984 22:23:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:40.984 22:23:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:25:40.984 22:23:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:40.984 22:23:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:40.984 22:23:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.984 22:23:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:40.984 22:23:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:40.984 22:23:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.984 22:23:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.984 [2024-07-15 22:23:06.038740] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:40.984 22:23:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.984 22:23:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:40.984 22:23:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.984 22:23:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.984 [2024-07-15 22:23:06.050919] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:40.984 22:23:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.984 22:23:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:40.984 22:23:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.984 22:23:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.984 null0 00:25:40.984 22:23:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.984 22:23:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:40.984 22:23:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.984 22:23:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.984 null1 00:25:40.984 22:23:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.984 22:23:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:40.984 22:23:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.984 22:23:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.984 22:23:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.984 22:23:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2896715 00:25:40.984 22:23:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2896715 /tmp/host.sock 00:25:40.984 22:23:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:40.984 22:23:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2896715 ']' 00:25:40.984 22:23:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:25:40.984 22:23:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:40.984 22:23:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:40.984 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:40.984 22:23:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:40.984 22:23:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.984 [2024-07-15 22:23:06.144588] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:25:40.985 [2024-07-15 22:23:06.144650] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2896715 ] 00:25:40.985 EAL: No free 2048 kB hugepages reported on node 1 00:25:40.985 [2024-07-15 22:23:06.208682] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.985 [2024-07-15 22:23:06.282434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:41.927 22:23:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:41.927 22:23:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:25:41.927 22:23:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:41.927 22:23:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:41.927 22:23:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.927 22:23:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.927 22:23:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.927 22:23:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:41.927 22:23:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.927 22:23:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.927 22:23:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.927 22:23:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:41.927 22:23:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:41.927 22:23:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:41.927 22:23:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:41.927 22:23:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:41.927 22:23:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.927 22:23:06 
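[editor's note] At this point a second SPDK application is playing the host role: it runs on core 0 with its RPC socket at /tmp/host.sock, and host/discovery.sh@51 points its bdev_nvme discovery client at the target's discovery service on 10.0.0.2:8009. The rpc_cmd helper in the trace is effectively equivalent to calling scripts/rpc.py (relative to the SPDK checkout used above) against that socket, so the same sequence can be reproduced roughly as:

  # start the host-side SPDK app with a dedicated RPC socket
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &

  # enable bdev_nvme debug logging and start discovery against the target's discovery service
  scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
      -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test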
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:41.927 22:23:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.927 22:23:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.927 22:23:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:41.927 22:23:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:41.927 22:23:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:41.927 22:23:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:41.927 22:23:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:41.927 22:23:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:41.927 22:23:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.927 22:23:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.927 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.927 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:41.927 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:41.927 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.927 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.927 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.927 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:41.927 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:41.927 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:41.927 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:41.927 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.927 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:41.927 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.927 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.927 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:41.927 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:41.927 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:41.927 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.927 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.927 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:41.927 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:41.927 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:41.927 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.927 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:41.927 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:41.928 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.928 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.928 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.928 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:41.928 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:41.928 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:41.928 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:41.928 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.928 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:41.928 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.928 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.928 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:41.928 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:41.928 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:41.928 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:41.928 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:41.928 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:41.928 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.928 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.928 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.188 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:42.188 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:42.188 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.188 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.188 [2024-07-15 22:23:07.278036] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:42.188 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.188 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:42.188 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:42.188 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:42.188 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:42.188 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.188 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.188 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:42.188 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.188 
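[editor's note] On the target side the trace has now provisioned the transport, the discovery listener, two null bdevs, the cnode0 subsystem with null0 as its first namespace, and a data-plane listener on port 4420. Those RPCs go to the target app's default socket (/var/tmp/spdk.sock, the rpc_addr shown in waitforlisten above); a rough scripts/rpc.py equivalent of the same steps, in trace order:

  RPC="scripts/rpc.py -s /var/tmp/spdk.sock"

  # transport plus discovery-service listener (host/discovery.sh@32-33)
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009

  # two null bdevs (size 1000, block size 512, as in the trace) to back the namespaces
  $RPC bdev_null_create null0 1000 512
  $RPC bdev_null_create null1 1000 512
  $RPC bdev_wait_for_examine

  # subsystem with null0 attached, plus the first data-plane listener
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The host NQN is authorized a few entries further down (host/discovery.sh@103, nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test), which is what lets the discovered path actually connect.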
22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:42.188 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:42.188 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:42.188 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:42.188 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.188 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:42.188 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.188 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:42.188 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.188 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:42.188 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:42.188 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:42.188 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:42.188 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:42.188 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:42.188 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:42.188 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:42.188 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:42.188 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:42.188 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:42.188 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.188 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.188 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.188 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:42.188 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:42.188 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:42.188 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:42.188 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:42.188 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.188 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.188 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.188 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:42.188 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:42.188 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:42.189 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:42.189 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:42.189 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:42.189 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:42.189 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:42.189 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:42.189 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.189 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.189 22:23:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:42.189 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.189 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:25:42.189 22:23:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:25:42.760 [2024-07-15 22:23:07.988358] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:42.760 [2024-07-15 22:23:07.988379] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:42.760 [2024-07-15 22:23:07.988393] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:42.760 [2024-07-15 22:23:08.076675] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:43.020 [2024-07-15 22:23:08.139261] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:25:43.020 [2024-07-15 22:23:08.139283] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:43.307 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:43.307 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:43.307 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:43.307 22:23:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:43.307 22:23:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:43.307 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.307 22:23:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:43.307 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.307 22:23:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:43.307 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.307 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.307 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:43.307 22:23:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:43.307 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:43.307 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:43.307 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:43.307 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:43.307 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:43.307 22:23:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:43.307 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.307 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.307 22:23:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:43.307 22:23:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:43.307 22:23:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:43.307 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:43.568 22:23:08 
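[editor's note] The waitforcondition / get_subsystem_names / get_bdev_list helpers seen in the trace simply poll the host app's RPC socket until the discovered controller (nvme0) and its namespace bdev (nvme0n1) show up, retrying up to 10 times with a one-second sleep. A condensed sketch of that polling, assuming the same jq/sort/xargs post-processing the trace uses:

  get_subsystem_names() {
      scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }

  get_bdev_list() {
      scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  # retry for up to ~10 seconds until the discovered controller and its bdev appear
  for _ in $(seq 10); do
      [[ "$(get_subsystem_names)" == "nvme0" && "$(get_bdev_list)" == "nvme0n1" ]] && break
      sleep 1
  done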
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.568 22:23:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.829 [2024-07-15 22:23:09.070762] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:43.829 [2024-07-15 22:23:09.070989] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:43.829 [2024-07-15 22:23:09.071016] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:43.829 22:23:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:44.089 22:23:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.089 22:23:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:44.090 22:23:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:44.090 22:23:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:44.090 22:23:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:44.090 22:23:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:44.090 22:23:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:44.090 22:23:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:44.090 22:23:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:44.090 22:23:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:44.090 22:23:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.090 22:23:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.090 22:23:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:44.090 22:23:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:44.090 22:23:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:44.090 [2024-07-15 22:23:09.199788] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:44.090 22:23:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.090 22:23:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:44.090 22:23:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:25:44.090 [2024-07-15 22:23:09.262391] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:44.090 [2024-07-15 22:23:09.262408] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:44.090 [2024-07-15 22:23:09.262414] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:45.028 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:45.028 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:45.028 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:45.028 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:45.028 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:45.028 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.028 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:45.028 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.028 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:45.028 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.028 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:45.028 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:45.028 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:45.028 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:45.028 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:45.028 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:45.028 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:45.028 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:45.028 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:45.028 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:45.028 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:45.029 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:45.029 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.029 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.029 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.029 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:45.029 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:45.029 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:45.029 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:45.029 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:45.029 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.029 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.290 [2024-07-15 22:23:10.354756] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:45.290 [2024-07-15 22:23:10.354780] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:45.290 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.290 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:45.290 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:45.290 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:45.290 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:45.290 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:45.290 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:45.290 [2024-07-15 22:23:10.363317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.290 [2024-07-15 22:23:10.363338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.290 [2024-07-15 22:23:10.363349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.290 [2024-07-15 22:23:10.363357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.290 [2024-07-15 22:23:10.363365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.290 [2024-07-15 22:23:10.363372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.290 [2024-07-15 22:23:10.363380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.290 [2024-07-15 22:23:10.363387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.290 [2024-07-15 22:23:10.363400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d969b0 is same with the state(5) to be set 00:25:45.290 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:45.290 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:45.290 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.290 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:45.290 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.290 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:45.290 [2024-07-15 22:23:10.373329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d969b0 (9): Bad file descriptor 00:25:45.290 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.290 [2024-07-15 22:23:10.383368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:45.290 [2024-07-15 22:23:10.383847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.290 [2024-07-15 22:23:10.383863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d969b0 with addr=10.0.0.2, port=4420 00:25:45.290 [2024-07-15 22:23:10.383871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d969b0 is same with the state(5) to be set 00:25:45.290 [2024-07-15 22:23:10.383883] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d969b0 (9): Bad file descriptor 00:25:45.290 [2024-07-15 22:23:10.383906] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:45.290 [2024-07-15 22:23:10.383914] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:45.290 [2024-07-15 22:23:10.383922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:45.290 [2024-07-15 22:23:10.383933] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.290 [2024-07-15 22:23:10.393424] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:45.290 [2024-07-15 22:23:10.393736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.290 [2024-07-15 22:23:10.393749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d969b0 with addr=10.0.0.2, port=4420 00:25:45.290 [2024-07-15 22:23:10.393756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d969b0 is same with the state(5) to be set 00:25:45.290 [2024-07-15 22:23:10.393767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d969b0 (9): Bad file descriptor 00:25:45.290 [2024-07-15 22:23:10.393777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:45.290 [2024-07-15 22:23:10.393783] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:45.290 [2024-07-15 22:23:10.393790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:45.290 [2024-07-15 22:23:10.393801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.290 [2024-07-15 22:23:10.403479] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:45.290 [2024-07-15 22:23:10.403811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.290 [2024-07-15 22:23:10.403824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d969b0 with addr=10.0.0.2, port=4420 00:25:45.290 [2024-07-15 22:23:10.403832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d969b0 is same with the state(5) to be set 00:25:45.290 [2024-07-15 22:23:10.403843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d969b0 (9): Bad file descriptor 00:25:45.290 [2024-07-15 22:23:10.403857] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:45.290 [2024-07-15 22:23:10.403864] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:45.290 [2024-07-15 22:23:10.403871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:45.290 [2024-07-15 22:23:10.403882] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.290 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.290 [2024-07-15 22:23:10.413534] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:45.290 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:45.290 [2024-07-15 22:23:10.413945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.290 [2024-07-15 22:23:10.413957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d969b0 with addr=10.0.0.2, port=4420 00:25:45.290 [2024-07-15 22:23:10.413965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d969b0 is same with the state(5) to be set 00:25:45.290 [2024-07-15 22:23:10.413975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d969b0 (9): Bad file descriptor 00:25:45.290 [2024-07-15 22:23:10.413986] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:45.290 [2024-07-15 22:23:10.413992] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:45.290 [2024-07-15 22:23:10.413999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:45.290 [2024-07-15 22:23:10.414015] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.290 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:45.290 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:45.290 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:45.290 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:45.290 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:45.290 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:45.290 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:45.290 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:45.290 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.290 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:45.290 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.290 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:45.290 [2024-07-15 22:23:10.423586] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:45.290 [2024-07-15 22:23:10.424028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.290 [2024-07-15 22:23:10.424040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d969b0 with addr=10.0.0.2, port=4420 00:25:45.290 [2024-07-15 22:23:10.424047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d969b0 is same with the state(5) to be set 00:25:45.290 [2024-07-15 22:23:10.424058] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d969b0 (9): Bad file descriptor 00:25:45.290 [2024-07-15 22:23:10.424901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:45.290 [2024-07-15 22:23:10.424913] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:45.290 [2024-07-15 22:23:10.424924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:45.290 [2024-07-15 22:23:10.424934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.290 [2024-07-15 22:23:10.433641] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:45.290 [2024-07-15 22:23:10.434078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.290 [2024-07-15 22:23:10.434091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d969b0 with addr=10.0.0.2, port=4420 00:25:45.290 [2024-07-15 22:23:10.434098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d969b0 is same with the state(5) to be set 00:25:45.290 [2024-07-15 22:23:10.434109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d969b0 (9): Bad file descriptor 00:25:45.290 [2024-07-15 22:23:10.434129] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:45.290 [2024-07-15 22:23:10.434137] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:45.291 [2024-07-15 22:23:10.434143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:45.291 [2024-07-15 22:23:10.434154] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
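The repeating errno = 111 / "Bad file descriptor" / "Resetting controller failed." block is the host driver retrying 10.0.0.2:4420 after that listener was removed: every connect() is refused, so each reset attempt fails, until the discovery poller drops the 4420 path and only 4421 survives (the "not found" / "found again" notices just below). The test rides this out by polling for the surviving path. A rough sketch of that poll, built from the host/discovery.sh@63 helper expanded below and the waitforcondition pattern in the trace (rpc.py path, socket, and the sleep are sketch assumptions):

# Wait until controller nvme0 is reachable only through the second port (4421).
rpc_py=./scripts/rpc.py        # assumed path to SPDK's rpc.py
host_sock=/tmp/host.sock

get_subsystem_paths() {
    "$rpc_py" -s "$host_sock" bdev_nvme_get_controllers -n "$1" \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}

max=10
while (( max-- )); do
    [[ "$(get_subsystem_paths nvme0)" == "4421" ]] && break
    sleep 1                    # pacing added for the sketch
done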
00:25:45.291 [2024-07-15 22:23:10.443157] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:45.291 [2024-07-15 22:23:10.443174] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:45.291 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.551 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:25:45.551 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:45.551 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:45.551 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:45.551 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:45.551 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:45.551 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:45.551 
22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:45.551 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:45.551 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:45.551 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:45.551 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.551 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.551 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:45.551 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.551 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:25:45.551 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:45.551 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:45.551 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:45.551 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:45.551 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:45.551 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:45.551 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:45.551 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:45.551 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:45.551 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:45.551 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:45.551 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.551 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.551 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.551 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:45.551 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:45.551 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:45.551 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:45.551 22:23:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:45.551 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.551 22:23:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.490 [2024-07-15 22:23:11.757182] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:46.490 [2024-07-15 22:23:11.757200] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:46.490 [2024-07-15 22:23:11.757212] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:46.750 [2024-07-15 22:23:11.845503] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:47.011 [2024-07-15 22:23:12.119300] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:47.011 [2024-07-15 22:23:12.119329] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:47.011 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.011 22:23:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:47.011 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:25:47.011 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:47.011 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:47.011 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:47.011 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:47.011 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:47.011 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:47.011 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.011 22:23:12 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:25:47.011 request: 00:25:47.011 { 00:25:47.011 "name": "nvme", 00:25:47.011 "trtype": "tcp", 00:25:47.011 "traddr": "10.0.0.2", 00:25:47.011 "adrfam": "ipv4", 00:25:47.011 "trsvcid": "8009", 00:25:47.011 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:47.011 "wait_for_attach": true, 00:25:47.011 "method": "bdev_nvme_start_discovery", 00:25:47.011 "req_id": 1 00:25:47.011 } 00:25:47.011 Got JSON-RPC error response 00:25:47.011 response: 00:25:47.011 { 00:25:47.011 "code": -17, 00:25:47.011 "message": "File exists" 00:25:47.011 } 00:25:47.011 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:47.011 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:25:47.011 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:47.011 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:47.011 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:47.011 22:23:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:47.011 22:23:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:47.011 22:23:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:47.011 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.011 22:23:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:47.011 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:47.011 22:23:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:47.011 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.011 22:23:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:47.011 22:23:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:47.012 22:23:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:47.012 22:23:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:47.012 22:23:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:47.012 22:23:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:47.012 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.012 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:47.012 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.012 22:23:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:47.012 22:23:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:47.012 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:25:47.012 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:47.012 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:25:47.012 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:47.012 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:47.012 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:47.012 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:47.012 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.012 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:47.012 request: 00:25:47.012 { 00:25:47.012 "name": "nvme_second", 00:25:47.012 "trtype": "tcp", 00:25:47.012 "traddr": "10.0.0.2", 00:25:47.012 "adrfam": "ipv4", 00:25:47.012 "trsvcid": "8009", 00:25:47.012 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:47.012 "wait_for_attach": true, 00:25:47.012 "method": "bdev_nvme_start_discovery", 00:25:47.012 "req_id": 1 00:25:47.012 } 00:25:47.012 Got JSON-RPC error response 00:25:47.012 response: 00:25:47.012 { 00:25:47.012 "code": -17, 00:25:47.012 "message": "File exists" 00:25:47.012 } 00:25:47.012 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:47.012 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:25:47.012 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:47.012 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:47.012 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:47.012 22:23:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:47.012 22:23:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:47.012 22:23:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:47.012 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.012 22:23:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:47.012 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:47.012 22:23:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:47.012 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.012 22:23:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:47.012 22:23:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:47.012 22:23:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:47.012 22:23:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:47.012 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.012 22:23:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:47.012 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:47.012 22:23:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:47.273 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.273 22:23:12 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:47.273 22:23:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:47.273 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:25:47.273 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:47.273 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:47.273 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:47.273 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:47.273 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:47.273 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:47.273 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.273 22:23:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:48.213 [2024-07-15 22:23:13.386905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.213 [2024-07-15 22:23:13.386934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dd4ec0 with addr=10.0.0.2, port=8010 00:25:48.213 [2024-07-15 22:23:13.386947] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:48.213 [2024-07-15 22:23:13.386954] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:48.213 [2024-07-15 22:23:13.386961] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:49.154 [2024-07-15 22:23:14.389168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.154 [2024-07-15 22:23:14.389190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dd4ec0 with addr=10.0.0.2, port=8010 00:25:49.154 [2024-07-15 22:23:14.389202] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:49.154 [2024-07-15 22:23:14.389208] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:49.154 [2024-07-15 22:23:14.389214] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:50.096 [2024-07-15 22:23:15.391161] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:50.096 request: 00:25:50.096 { 00:25:50.096 "name": "nvme_second", 00:25:50.096 "trtype": "tcp", 00:25:50.096 "traddr": "10.0.0.2", 00:25:50.096 "adrfam": "ipv4", 00:25:50.096 "trsvcid": "8010", 00:25:50.096 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:50.096 "wait_for_attach": false, 00:25:50.096 "attach_timeout_ms": 3000, 00:25:50.096 "method": "bdev_nvme_start_discovery", 00:25:50.096 "req_id": 1 00:25:50.096 } 00:25:50.096 Got JSON-RPC error response 00:25:50.096 response: 00:25:50.096 { 00:25:50.096 "code": -110, 
00:25:50.096 "message": "Connection timed out" 00:25:50.096 } 00:25:50.096 22:23:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:50.096 22:23:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:25:50.096 22:23:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:50.096 22:23:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:50.096 22:23:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:50.096 22:23:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:50.096 22:23:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:50.096 22:23:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:50.096 22:23:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.096 22:23:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:50.096 22:23:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.096 22:23:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:50.096 22:23:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.357 22:23:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:50.357 22:23:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:50.357 22:23:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2896715 00:25:50.357 22:23:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:50.357 22:23:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:50.357 22:23:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:25:50.357 22:23:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:50.357 22:23:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:25:50.357 22:23:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:50.357 22:23:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:50.357 rmmod nvme_tcp 00:25:50.357 rmmod nvme_fabrics 00:25:50.357 rmmod nvme_keyring 00:25:50.357 22:23:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:50.357 22:23:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:25:50.357 22:23:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:25:50.357 22:23:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2896392 ']' 00:25:50.357 22:23:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2896392 00:25:50.357 22:23:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 2896392 ']' 00:25:50.357 22:23:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 2896392 00:25:50.357 22:23:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:25:50.357 22:23:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:50.357 22:23:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2896392 00:25:50.357 22:23:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 
00:25:50.357 22:23:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:50.357 22:23:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2896392' 00:25:50.357 killing process with pid 2896392 00:25:50.357 22:23:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 2896392 00:25:50.357 22:23:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 2896392 00:25:50.617 22:23:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:50.617 22:23:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:50.617 22:23:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:50.617 22:23:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:50.617 22:23:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:50.617 22:23:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.617 22:23:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:50.617 22:23:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:52.531 22:23:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:52.531 00:25:52.531 real 0m19.995s 00:25:52.531 user 0m23.662s 00:25:52.531 sys 0m6.810s 00:25:52.531 22:23:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:52.531 22:23:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.531 ************************************ 00:25:52.531 END TEST nvmf_host_discovery 00:25:52.531 ************************************ 00:25:52.531 22:23:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:52.531 22:23:17 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:52.531 22:23:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:52.531 22:23:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:52.531 22:23:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:52.531 ************************************ 00:25:52.531 START TEST nvmf_host_multipath_status 00:25:52.531 ************************************ 00:25:52.531 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:52.792 * Looking for test storage... 
00:25:52.792 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:52.792 22:23:17 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:25:52.792 22:23:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:59.412 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:59.412 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:25:59.412 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:59.412 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:59.412 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:59.412 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:59.412 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:59.412 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:25:59.412 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:59.412 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:25:59.412 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:25:59.412 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:25:59.412 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:25:59.412 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:25:59.412 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:25:59.412 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:59.412 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:59.412 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:59.412 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:59.412 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:59.412 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:59.412 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:59.412 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:59.412 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:59.412 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:59.412 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:59.412 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:59.412 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:59.412 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:59.412 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:59.412 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:59.412 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:59.412 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:59.412 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:59.412 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:59.412 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:59.412 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:59.412 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:59.412 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:59.412 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:59.412 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:59.412 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:59.413 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:59.413 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:59.413 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:59.413 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:59.413 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:59.413 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:59.413 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:59.413 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:59.413 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
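The nvmf/common.sh block above is the physical-NIC probe for the multipath run: the harness collects Intel E810 functions (0x8086:0x159b at 0000:4b:00.0 and 0000:4b:00.1), resolves the kernel net device bound to each PCI address (the "Found net devices under ..." notices just below), and nominates the first port (cvl_0_0) as the target-side interface and the second (cvl_0_1) as the initiator side. A condensed sketch of that selection, assuming the sysfs layout of this machine:

# Condensed sketch of the NIC selection traced around nvmf/common.sh@340-401 and @236-237.
pci_devs=(0000:4b:00.0 0000:4b:00.1)     # E810 (0x8086:0x159b) functions found above
net_devs=()
for pci in "${pci_devs[@]}"; do
    # each PCI function exposes its netdev name under sysfs
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        net_devs+=("${dev##*/}")         # -> cvl_0_0, cvl_0_1 on this host
    done
done
NVMF_TARGET_INTERFACE=${net_devs[0]}     # cvl_0_0 will carry the NVMe/TCP target
NVMF_INITIATOR_INTERFACE=${net_devs[1]}  # cvl_0_1 stays with the initiator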
00:25:59.413 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:59.413 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:59.413 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:59.413 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:59.413 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:59.413 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:59.413 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:59.413 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:59.413 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:59.413 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:59.413 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:59.413 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:59.413 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:59.413 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:59.413 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:59.413 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:59.413 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:59.413 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:59.413 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:59.413 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:59.413 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:59.413 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:25:59.413 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:59.413 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:59.413 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:59.413 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:59.413 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:59.413 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:59.413 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:59.413 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:59.413 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:59.413 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:59.413 22:23:24 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:59.413 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:59.413 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:59.413 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:59.413 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:59.413 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:59.675 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:59.675 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:59.675 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:59.675 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:59.675 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:59.675 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:59.675 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:59.675 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:59.675 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms 00:25:59.675 00:25:59.675 --- 10.0.0.2 ping statistics --- 00:25:59.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.675 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:25:59.675 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:59.675 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:59.675 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:25:59.675 00:25:59.675 --- 10.0.0.1 ping statistics --- 00:25:59.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.675 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:25:59.675 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:59.675 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:25:59.675 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:59.675 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:59.675 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:59.675 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:59.675 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:59.675 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:59.675 22:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:59.936 22:23:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:59.936 22:23:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:59.936 22:23:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:59.936 22:23:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:59.936 22:23:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2902741 00:25:59.936 22:23:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2902741 00:25:59.936 22:23:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2902741 ']' 00:25:59.936 22:23:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:59.936 22:23:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:59.936 22:23:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:59.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:59.936 22:23:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:59.936 22:23:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:59.936 22:23:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:59.936 [2024-07-15 22:23:25.092635] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
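With the interfaces chosen, the harness builds a self-contained two-endpoint topology on one box: cvl_0_0 is moved into a fresh cvl_0_0_ns_spdk namespace and given 10.0.0.2/24, cvl_0_1 keeps 10.0.0.1/24 in the root namespace, port 4420 is opened in iptables, and a ping in each direction (the 0.321/0.325 ms replies above) proves the path before nvmf_tgt is started inside the namespace. Reduced to the commands visible in the trace (run as root; the relative nvmf_tgt path stands in for the workspace build used by this job):

# Target-side namespace and addressing, as performed by nvmf/common.sh above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side (root ns)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Sanity-check both directions, then start the target inside the namespace.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &

The RPC sequence that follows in the trace then populates the target and hands bdevperf its two paths; stripped of the per-test wrappers it amounts to roughly the following sketch (every RPC name and flag is taken from the multipath_status.sh lines below; the rpc.py path and the bdevperf socket location are carried over from the trace):

rpc_py=./scripts/rpc.py        # assumed path to SPDK's rpc.py
# Target side: TCP transport, a Malloc-backed subsystem, and two listeners.
"$rpc_py" nvmf_create_transport -t tcp -o -u 8192
"$rpc_py" bdev_malloc_create 64 512 -b Malloc0
"$rpc_py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
"$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

# Host side (bdevperf): one controller name, two paths, multipath on the second.
"$rpc_py" -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
"$rpc_py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
"$rpc_py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
    -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10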
00:25:59.936 [2024-07-15 22:23:25.092702] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:59.936 EAL: No free 2048 kB hugepages reported on node 1 00:25:59.936 [2024-07-15 22:23:25.162751] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:59.936 [2024-07-15 22:23:25.237521] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:59.936 [2024-07-15 22:23:25.237558] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:59.936 [2024-07-15 22:23:25.237565] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:59.936 [2024-07-15 22:23:25.237572] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:59.936 [2024-07-15 22:23:25.237578] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:59.936 [2024-07-15 22:23:25.237739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:59.936 [2024-07-15 22:23:25.237741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:00.879 22:23:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:00.879 22:23:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:26:00.879 22:23:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:00.879 22:23:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:00.879 22:23:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:00.879 22:23:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:00.879 22:23:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2902741 00:26:00.879 22:23:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:00.879 [2024-07-15 22:23:26.033613] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:00.879 22:23:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:01.140 Malloc0 00:26:01.140 22:23:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:26:01.140 22:23:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:01.401 22:23:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:01.401 [2024-07-15 22:23:26.644821] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:01.401 22:23:26 
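The target configuration traced around this point (transport, malloc namespace and the two listeners) is a short sequence of rpc.py calls against the default /var/tmp/spdk.sock. A condensed sketch, with $rootdir standing in for the long workspace path and the flags copied from the trace, could look like this; the sleep is only a placeholder for the harness's waitforlisten step.

  #!/usr/bin/env bash
  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # abbreviation for this run's workspace
  rpc="$rootdir/scripts/rpc.py"

  # Launch the target inside the namespace on cores 0-1 with tracing enabled (-e 0xFFFF),
  # then give it a moment to bring up its RPC socket.
  ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
  sleep 3

  "$rpc" nvmf_create_transport -t tcp -o -u 8192                 # TCP transport with the test's options
  "$rpc" bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB RAM disk, 512 B blocks
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421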
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:01.663 [2024-07-15 22:23:26.797176] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:01.663 22:23:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2903109 00:26:01.663 22:23:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:01.663 22:23:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:01.663 22:23:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2903109 /var/tmp/bdevperf.sock 00:26:01.663 22:23:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2903109 ']' 00:26:01.663 22:23:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:01.663 22:23:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:01.663 22:23:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:01.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:01.663 22:23:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:01.663 22:23:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:02.605 22:23:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:02.605 22:23:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:26:02.605 22:23:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:02.605 22:23:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:26:02.866 Nvme0n1 00:26:02.866 22:23:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:03.438 Nvme0n1 00:26:03.438 22:23:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:03.438 22:23:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:05.374 22:23:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:05.374 22:23:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:05.635 22:23:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:05.635 22:23:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:07.027 22:23:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:07.027 22:23:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:07.027 22:23:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.027 22:23:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:07.027 22:23:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.027 22:23:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:07.027 22:23:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.027 22:23:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:07.027 22:23:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:07.027 22:23:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:07.027 22:23:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.027 22:23:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:07.287 22:23:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.288 22:23:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:07.288 22:23:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.288 22:23:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:07.288 22:23:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.288 22:23:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:07.288 22:23:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.288 22:23:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:07.549 22:23:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.549 22:23:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:07.549 22:23:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.549 22:23:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:07.809 22:23:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.809 22:23:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:07.809 22:23:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:07.810 22:23:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:08.081 22:23:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:09.026 22:23:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:09.026 22:23:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:09.026 22:23:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.026 22:23:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:09.287 22:23:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:09.287 22:23:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:09.287 22:23:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.287 22:23:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:09.548 22:23:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.548 22:23:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:09.548 22:23:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.548 22:23:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:09.548 22:23:34 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.548 22:23:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:09.548 22:23:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.548 22:23:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:09.809 22:23:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.809 22:23:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:09.809 22:23:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.809 22:23:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:10.071 22:23:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.071 22:23:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:10.071 22:23:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.071 22:23:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:10.071 22:23:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.071 22:23:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:10.071 22:23:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:10.331 22:23:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:10.331 22:23:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:11.717 22:23:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:11.717 22:23:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:11.717 22:23:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.717 22:23:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:11.717 22:23:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.717 22:23:36 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:11.717 22:23:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.717 22:23:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:11.717 22:23:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:11.717 22:23:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:11.717 22:23:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.717 22:23:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:11.981 22:23:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.981 22:23:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:11.981 22:23:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.981 22:23:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:11.981 22:23:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.981 22:23:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:11.981 22:23:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.981 22:23:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:12.243 22:23:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:12.243 22:23:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:12.243 22:23:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.243 22:23:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:12.504 22:23:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:12.504 22:23:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:12.504 22:23:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:12.504 22:23:37 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:12.764 22:23:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:13.707 22:23:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:13.707 22:23:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:13.707 22:23:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.708 22:23:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:13.969 22:23:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.969 22:23:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:13.969 22:23:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.969 22:23:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:13.969 22:23:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:13.969 22:23:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:13.969 22:23:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:13.969 22:23:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.230 22:23:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.230 22:23:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:14.230 22:23:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.230 22:23:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:14.491 22:23:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.491 22:23:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:14.491 22:23:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.491 22:23:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:14.491 22:23:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:26:14.491 22:23:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:14.491 22:23:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.491 22:23:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:14.752 22:23:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:14.752 22:23:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:14.753 22:23:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:15.013 22:23:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:15.013 22:23:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:16.400 22:23:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:16.400 22:23:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:16.400 22:23:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.400 22:23:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:16.400 22:23:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:16.400 22:23:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:16.400 22:23:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.400 22:23:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:16.400 22:23:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:16.400 22:23:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:16.400 22:23:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.400 22:23:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:16.661 22:23:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.661 22:23:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:26:16.661 22:23:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.661 22:23:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:16.922 22:23:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.922 22:23:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:16.922 22:23:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.922 22:23:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:16.922 22:23:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:16.922 22:23:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:16.922 22:23:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.922 22:23:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:17.184 22:23:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:17.184 22:23:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:17.184 22:23:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:17.184 22:23:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:17.475 22:23:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:18.420 22:23:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:18.420 22:23:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:18.420 22:23:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.420 22:23:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:18.679 22:23:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:18.679 22:23:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:18.679 22:23:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.679 22:23:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:18.679 22:23:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.679 22:23:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:18.679 22:23:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.679 22:23:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:18.939 22:23:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.939 22:23:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:18.939 22:23:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.939 22:23:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:19.200 22:23:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.200 22:23:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:19.200 22:23:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.200 22:23:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:19.200 22:23:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:19.200 22:23:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:19.200 22:23:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.200 22:23:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:19.461 22:23:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.461 22:23:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:19.461 22:23:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:26:19.461 22:23:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:26:19.721 22:23:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:19.982 22:23:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:20.923 22:23:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:20.923 22:23:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:20.923 22:23:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.923 22:23:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:21.183 22:23:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.183 22:23:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:21.183 22:23:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.183 22:23:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:21.183 22:23:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.183 22:23:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:21.183 22:23:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.183 22:23:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:21.443 22:23:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.443 22:23:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:21.443 22:23:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.443 22:23:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:21.703 22:23:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.703 22:23:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:21.703 22:23:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.703 22:23:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:21.703 22:23:46 
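The set_ANA_state / check_status cycles that repeat through this part of the log reduce to two small helpers. The sketch below mirrors multipath_status.sh@59-64 as traced; it is an outline under the same paths and RPCs, not the script itself, and NQN is just a local shorthand introduced here.

  #!/usr/bin/env bash
  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  NQN=nqn.2016-06.io.spdk:cnode1

  # Advertise a new ANA state on each listener: optimized, non_optimized or inaccessible.
  set_ANA_state() {   # $1 = state for port 4420, $2 = state for port 4421
      "$rootdir/scripts/rpc.py" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      "$rootdir/scripts/rpc.py" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }

  # Ask bdevperf, over its own RPC socket, how it currently sees one path and compare with the expectation.
  port_status() {     # $1 = trsvcid, $2 = field (current|connected|accessible), $3 = expected value
      local got
      got=$("$rootdir/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$1\").$2")
      [[ "$got" == "$3" ]]
  }

  # Example cycle matching the pattern in the trace: demote 4420, keep 4421 optimized,
  # then expect the active path to move to 4421 while both stay connected.
  set_ANA_state non_optimized optimized
  sleep 1
  port_status 4420 current false && port_status 4421 current true &&
  port_status 4420 connected true && port_status 4421 connected true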
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.703 22:23:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:21.703 22:23:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.703 22:23:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:21.963 22:23:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.963 22:23:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:21.963 22:23:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:22.223 22:23:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:22.223 22:23:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:23.161 22:23:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:23.161 22:23:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:23.421 22:23:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.421 22:23:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:23.421 22:23:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:23.421 22:23:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:23.421 22:23:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.421 22:23:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:23.680 22:23:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.680 22:23:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:23.680 22:23:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.680 22:23:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:23.680 22:23:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.680 22:23:48 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:23.680 22:23:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.680 22:23:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:23.940 22:23:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.940 22:23:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:23.940 22:23:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.940 22:23:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:24.199 22:23:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:24.199 22:23:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:24.199 22:23:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.199 22:23:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:24.199 22:23:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:24.199 22:23:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:24.199 22:23:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:24.462 22:23:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:24.741 22:23:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:25.684 22:23:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:25.684 22:23:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:25.684 22:23:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.684 22:23:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:25.684 22:23:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.684 22:23:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:25.684 22:23:50 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.684 22:23:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:25.944 22:23:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.944 22:23:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:25.944 22:23:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.944 22:23:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:26.204 22:23:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.204 22:23:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:26.204 22:23:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.204 22:23:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:26.204 22:23:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.204 22:23:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:26.204 22:23:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.204 22:23:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:26.464 22:23:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.464 22:23:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:26.464 22:23:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.464 22:23:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:26.724 22:23:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.724 22:23:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:26.724 22:23:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:26.724 22:23:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:26.984 22:23:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:27.927 22:23:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:27.927 22:23:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:27.927 22:23:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.927 22:23:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:28.187 22:23:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.187 22:23:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:28.187 22:23:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.187 22:23:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:28.187 22:23:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:28.187 22:23:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:28.448 22:23:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.448 22:23:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:28.448 22:23:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.448 22:23:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:28.448 22:23:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.448 22:23:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:28.708 22:23:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.708 22:23:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:28.708 22:23:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.708 22:23:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:28.708 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.708 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:28.708 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.708 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:28.968 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:28.968 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2903109 00:26:28.968 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2903109 ']' 00:26:28.968 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2903109 00:26:28.968 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:26:28.968 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:28.968 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2903109 00:26:28.968 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:26:28.968 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:26:28.968 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2903109' 00:26:28.968 killing process with pid 2903109 00:26:28.968 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2903109 00:26:28.968 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2903109 00:26:29.247 Connection closed with partial response: 00:26:29.247 00:26:29.247 00:26:29.247 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2903109 00:26:29.247 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:29.247 [2024-07-15 22:23:26.860151] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:26:29.247 [2024-07-15 22:23:26.860208] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2903109 ] 00:26:29.247 EAL: No free 2048 kB hugepages reported on node 1 00:26:29.247 [2024-07-15 22:23:26.909730] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.247 [2024-07-15 22:23:26.961863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:29.247 Running I/O for 90 seconds... 
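The try.txt dump that begins here is bdevperf's own log. The host-side sequence that produced it (multipath_status.sh@44-56, @76 and @116) is outlined below; the binaries, sockets and flags are copied from the trace, and $rootdir again abbreviates the workspace path.

  #!/usr/bin/env bash
  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sock=/var/tmp/bdevperf.sock

  # bdevperf on core 2, waiting for RPC configuration (-z): 128 queue depth, 4 KiB verify I/O, 90 s run.
  "$rootdir/build/examples/bdevperf" -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 90 &

  # Attach both listeners of the same subsystem so they become two paths of one bdev (Nvme0n1).
  "$rootdir/scripts/rpc.py" -s "$sock" bdev_nvme_set_options -r -1
  "$rootdir/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
  "$rootdir/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

  # Start the workload; later in the test, path selection is switched to active/active.
  "$rootdir/examples/bdev/bdevperf/bdevperf.py" -t 120 -s "$sock" perform_tests &
  "$rootdir/scripts/rpc.py" -s "$sock" bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active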
00:26:29.247 [2024-07-15 22:23:40.130244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:41280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:29.247 [2024-07-15 22:23:40.130281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
[... repeated nvme_qpair.c NOTICE command/completion pairs omitted: WRITE commands (sqid:1 nsid:1, lba 41288-41544, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands (lba 40528-41272, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0, sqhd 0048 through 0046 (wrapping at 007f), timestamps 2024-07-15 22:23:40.130313 - 22:23:40.135017, elapsed 00:26:29.247 - 00:26:29.250 ...]
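The "(03/02)" pair that spdk_nvme_print_completion keeps reporting is the completion's status code type and status code: per the NVMe base specification, SCT 0x3 is Path Related Status and SC 0x02 is Asymmetric Access Inaccessible, which is why every READ/WRITE submitted on this qpair is failed while the path's ANA state is inaccessible. A minimal sketch of how those printed fields fall out of completion-queue-entry Dword 3 (bit layout taken from the spec; the helper name and lookup tables below are illustrative, not SPDK API):

# Hypothetical helper (not from SPDK): decode the NVMe completion status bits
# that nvme_qpair.c prints as "(SCT/SC) ... p:.. m:.. dnr:..".
# Completion queue entry Dword 3 layout per the NVMe base spec:
#   bit 16 = phase tag, bits 24:17 = status code (SC),
#   bits 27:25 = status code type (SCT), bit 30 = more, bit 31 = do-not-retry.

SCT_NAMES = {0x0: "GENERIC", 0x1: "COMMAND SPECIFIC",
             0x2: "MEDIA AND DATA INTEGRITY ERROR", 0x3: "PATH RELATED"}
PATH_SC_NAMES = {0x00: "INTERNAL PATH ERROR",
                 0x01: "ASYMMETRIC ACCESS PERSISTENT LOSS",
                 0x02: "ASYMMETRIC ACCESS INACCESSIBLE",
                 0x03: "ASYMMETRIC ACCESS TRANSITION"}

def decode_cqe_dw3(dw3):
    """Split completion-queue-entry Dword 3 into the fields shown in the log."""
    return {
        "p":   (dw3 >> 16) & 0x1,
        "sc":  (dw3 >> 17) & 0xFF,
        "sct": (dw3 >> 25) & 0x7,
        "m":   (dw3 >> 30) & 0x1,
        "dnr": (dw3 >> 31) & 0x1,
    }

# A status word carrying SCT=0x3 / SC=0x02 decodes to the line seen above.
fields = decode_cqe_dw3((0x02 << 17) | (0x3 << 25))
assert (fields["sct"], fields["sc"]) == (0x3, 0x02)
print(f'{SCT_NAMES[fields["sct"]]}: {PATH_SC_NAMES[fields["sc"]]} '
      f'({fields["sct"]:02x}/{fields["sc"]:02x}) '
      f'p:{fields["p"]} m:{fields["m"]} dnr:{fields["dnr"]}')

With dnr:0 the controller is not forbidding retries, so an ANA-aware host is expected to reissue these I/Os on another path once failover completes rather than surface them as hard errors.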
00:26:29.250 [2024-07-15 22:23:52.135390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:107032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:29.250 [2024-07-15 22:23:52.135424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
[... repeated nvme_qpair.c NOTICE command/completion pairs omitted: WRITE commands (sqid:1 nsid:1, lba 107048-107408, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands (lba 106288-107000, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0, sqhd 0073 through 0035 (wrapping at 007f), timestamps 2024-07-15 22:23:52.135441 - 22:23:52.138947, elapsed 00:26:29.250 - 00:26:29.252 ...]
00:26:29.252 [2024-07-15 22:23:52.138958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:106808 len:8 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:26:29.252 [2024-07-15 22:23:52.138963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:29.252 [2024-07-15 22:23:52.138973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:106840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.252 [2024-07-15 22:23:52.138978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:29.252 [2024-07-15 22:23:52.138988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:106872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.252 [2024-07-15 22:23:52.138993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:29.252 [2024-07-15 22:23:52.139003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:107096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.252 [2024-07-15 22:23:52.139009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:29.252 [2024-07-15 22:23:52.139019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:106656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.252 [2024-07-15 22:23:52.139024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:29.252 [2024-07-15 22:23:52.139035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:106744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.252 [2024-07-15 22:23:52.139040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:29.252 [2024-07-15 22:23:52.139050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:107320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.252 [2024-07-15 22:23:52.139055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:29.252 [2024-07-15 22:23:52.139066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:106600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.252 [2024-07-15 22:23:52.139071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:29.252 [2024-07-15 22:23:52.139082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.252 [2024-07-15 22:23:52.139087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:29.252 [2024-07-15 22:23:52.139097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:106320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.252 [2024-07-15 22:23:52.139102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:29.252 [2024-07-15 22:23:52.139249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:41 nsid:1 lba:107432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.252 [2024-07-15 22:23:52.139257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:29.252 [2024-07-15 22:23:52.139268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:107448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.253 [2024-07-15 22:23:52.139273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:29.253 [2024-07-15 22:23:52.139283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:107464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.253 [2024-07-15 22:23:52.139288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.253 [2024-07-15 22:23:52.139299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:107480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.253 [2024-07-15 22:23:52.139304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:29.253 [2024-07-15 22:23:52.139315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:106904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.253 [2024-07-15 22:23:52.139320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:29.253 [2024-07-15 22:23:52.139680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:106944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.253 [2024-07-15 22:23:52.139690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:29.253 [2024-07-15 22:23:52.139702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:106976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.253 [2024-07-15 22:23:52.139707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:29.253 [2024-07-15 22:23:52.139718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:107008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.253 [2024-07-15 22:23:52.139723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:29.253 [2024-07-15 22:23:52.139733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:107392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.253 [2024-07-15 22:23:52.139738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:29.253 [2024-07-15 22:23:52.139749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:106776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.253 [2024-07-15 22:23:52.139754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.253 [2024-07-15 
22:23:52.139765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:106840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.253 [2024-07-15 22:23:52.139770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:29.253 [2024-07-15 22:23:52.139825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:107096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.253 [2024-07-15 22:23:52.139833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:29.253 [2024-07-15 22:23:52.139846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:106744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.253 [2024-07-15 22:23:52.139851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:29.253 [2024-07-15 22:23:52.139862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:106600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.253 [2024-07-15 22:23:52.139867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:29.253 [2024-07-15 22:23:52.139878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:106320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.253 [2024-07-15 22:23:52.139883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:29.253 [2024-07-15 22:23:52.139928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:107448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.253 [2024-07-15 22:23:52.139935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:29.253 [2024-07-15 22:23:52.139946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:107480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.253 [2024-07-15 22:23:52.139951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:29.253 [2024-07-15 22:23:52.140092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:107496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.253 [2024-07-15 22:23:52.140100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:29.253 [2024-07-15 22:23:52.140110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:107512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.253 [2024-07-15 22:23:52.140116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:29.253 [2024-07-15 22:23:52.140133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:107040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.253 [2024-07-15 22:23:52.140138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:41 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:29.253 [2024-07-15 22:23:52.140178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:107072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.253 [2024-07-15 22:23:52.140186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:29.253 [2024-07-15 22:23:52.140196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:107104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.253 [2024-07-15 22:23:52.140202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:29.253 [2024-07-15 22:23:52.140212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:107136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.253 [2024-07-15 22:23:52.140217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:29.253 [2024-07-15 22:23:52.140227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:106976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.253 [2024-07-15 22:23:52.140233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:29.253 [2024-07-15 22:23:52.140244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.253 [2024-07-15 22:23:52.140250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:29.253 [2024-07-15 22:23:52.140261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:106840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.253 [2024-07-15 22:23:52.140266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:29.253 [2024-07-15 22:23:52.140833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:106744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.253 [2024-07-15 22:23:52.140843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:29.253 [2024-07-15 22:23:52.140855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:106320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.253 [2024-07-15 22:23:52.140860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:29.253 [2024-07-15 22:23:52.140870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:107480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.253 [2024-07-15 22:23:52.140875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:29.253 [2024-07-15 22:23:52.140885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:107512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.253 [2024-07-15 22:23:52.140892] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:29.253 [2024-07-15 22:23:52.140902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:107104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.253 [2024-07-15 22:23:52.140907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:29.253 [2024-07-15 22:23:52.140918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:106976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.253 [2024-07-15 22:23:52.140923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:29.253 [2024-07-15 22:23:52.140933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:106840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.253 [2024-07-15 22:23:52.140938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:29.253 [2024-07-15 22:23:52.141352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:107536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.253 [2024-07-15 22:23:52.141362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:29.253 [2024-07-15 22:23:52.141373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:107552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.253 [2024-07-15 22:23:52.141378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.253 [2024-07-15 22:23:52.141389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:107568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.253 [2024-07-15 22:23:52.141394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:29.253 [2024-07-15 22:23:52.141405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:107584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.253 [2024-07-15 22:23:52.141412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:29.253 [2024-07-15 22:23:52.141422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:107600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.253 [2024-07-15 22:23:52.141428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:29.253 [2024-07-15 22:23:52.141439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:107616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.253 [2024-07-15 22:23:52.141443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:29.253 [2024-07-15 22:23:52.141454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:107184 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:29.253 [2024-07-15 22:23:52.141459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:29.253 [2024-07-15 22:23:52.141469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:107216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.253 [2024-07-15 22:23:52.141475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:29.253 [2024-07-15 22:23:52.141485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:107248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.253 [2024-07-15 22:23:52.141490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.253 [2024-07-15 22:23:52.141618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:107080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.253 [2024-07-15 22:23:52.141627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:29.254 [2024-07-15 22:23:52.141638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:106320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.254 [2024-07-15 22:23:52.141644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:29.254 [2024-07-15 22:23:52.141654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.254 [2024-07-15 22:23:52.141659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:29.254 [2024-07-15 22:23:52.141669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:106976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.254 [2024-07-15 22:23:52.141674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:29.254 [2024-07-15 22:23:52.141746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:107624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.254 [2024-07-15 22:23:52.141754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:29.254 [2024-07-15 22:23:52.141764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:107640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.254 [2024-07-15 22:23:52.141769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:29.254 [2024-07-15 22:23:52.141779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:107656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.254 [2024-07-15 22:23:52.141787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:29.254 [2024-07-15 22:23:52.141797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:87 nsid:1 lba:107672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.254 [2024-07-15 22:23:52.141802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:29.254 [2024-07-15 22:23:52.141813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:107280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.254 [2024-07-15 22:23:52.141818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:29.254 [2024-07-15 22:23:52.141828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:107312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.254 [2024-07-15 22:23:52.141833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:29.254 [2024-07-15 22:23:52.141972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:107112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.254 [2024-07-15 22:23:52.141979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:29.254 [2024-07-15 22:23:52.141990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:107176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.254 [2024-07-15 22:23:52.141996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:29.254 [2024-07-15 22:23:52.142006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:107552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.254 [2024-07-15 22:23:52.142011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:29.254 [2024-07-15 22:23:52.142021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:107584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.254 [2024-07-15 22:23:52.142026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:29.254 [2024-07-15 22:23:52.142072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:107616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.254 [2024-07-15 22:23:52.142079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:29.254 [2024-07-15 22:23:52.142090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:107216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.254 [2024-07-15 22:23:52.142096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:29.254 [2024-07-15 22:23:52.142107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:106320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.254 [2024-07-15 22:23:52.142113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:29.254 [2024-07-15 
22:23:52.142128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:106976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.254 [2024-07-15 22:23:52.142134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:29.254 [2024-07-15 22:23:52.142567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:107640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.254 [2024-07-15 22:23:52.142577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:29.254 [2024-07-15 22:23:52.142591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:107672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.254 [2024-07-15 22:23:52.142597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:29.254 [2024-07-15 22:23:52.142607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:107312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.254 [2024-07-15 22:23:52.142612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:29.254 [2024-07-15 22:23:52.142623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:107176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.254 [2024-07-15 22:23:52.142628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:29.254 [2024-07-15 22:23:52.142639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:107584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.254 [2024-07-15 22:23:52.142645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.254 [2024-07-15 22:23:52.143135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.254 [2024-07-15 22:23:52.143144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.254 [2024-07-15 22:23:52.143156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:107704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.254 [2024-07-15 22:23:52.143161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.254 [2024-07-15 22:23:52.143171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:107720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.254 [2024-07-15 22:23:52.143176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:29.254 [2024-07-15 22:23:52.143186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:107736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.254 [2024-07-15 22:23:52.143191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:44 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:29.254 [2024-07-15 22:23:52.143202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:107240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.254 [2024-07-15 22:23:52.143207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:29.254 [2024-07-15 22:23:52.143218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:107064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.254 [2024-07-15 22:23:52.143223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:29.254 [2024-07-15 22:23:52.143233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:107360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.254 [2024-07-15 22:23:52.143238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:29.254 [2024-07-15 22:23:52.143248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:107304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.254 [2024-07-15 22:23:52.143254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:29.254 [2024-07-15 22:23:52.143266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:107216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.254 [2024-07-15 22:23:52.143272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:29.254 [2024-07-15 22:23:52.143282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:106976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.254 [2024-07-15 22:23:52.143287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:29.254 [2024-07-15 22:23:52.143297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:107672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.254 [2024-07-15 22:23:52.143304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:29.254 [2024-07-15 22:23:52.143314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:107176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.254 [2024-07-15 22:23:52.143319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:29.254 [2024-07-15 22:23:52.143517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:107744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.254 [2024-07-15 22:23:52.143525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:29.254 [2024-07-15 22:23:52.143536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:107760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.254 [2024-07-15 22:23:52.143541] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:29.254 [2024-07-15 22:23:52.143551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:107336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.254 [2024-07-15 22:23:52.143557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:29.254 [2024-07-15 22:23:52.143567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:107192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.254 [2024-07-15 22:23:52.143573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:29.254 [2024-07-15 22:23:52.143583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:107368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.254 [2024-07-15 22:23:52.143588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:29.254 [2024-07-15 22:23:52.144128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:107400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.254 [2024-07-15 22:23:52.144138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:29.254 [2024-07-15 22:23:52.144149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:107704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.254 [2024-07-15 22:23:52.144154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:29.254 [2024-07-15 22:23:52.144164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:107736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.254 [2024-07-15 22:23:52.144169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:29.255 [2024-07-15 22:23:52.144180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:107064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.255 [2024-07-15 22:23:52.144187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:29.255 [2024-07-15 22:23:52.144197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:107304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.255 [2024-07-15 22:23:52.144203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:29.255 [2024-07-15 22:23:52.144213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:106976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.255 [2024-07-15 22:23:52.144218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:29.255 [2024-07-15 22:23:52.144228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:107176 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:29.255 [2024-07-15 22:23:52.144233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:29.255 [2024-07-15 22:23:52.144245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:107760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.255 [2024-07-15 22:23:52.144250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:29.255 [2024-07-15 22:23:52.144261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:107192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.255 [2024-07-15 22:23:52.144266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:29.255 [2024-07-15 22:23:52.144439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:107776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.255 [2024-07-15 22:23:52.144447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:29.255 [2024-07-15 22:23:52.144458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:107792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.255 [2024-07-15 22:23:52.144463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:29.255 [2024-07-15 22:23:52.144473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:107808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.255 [2024-07-15 22:23:52.144478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:29.255 [2024-07-15 22:23:52.144489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:107824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.255 [2024-07-15 22:23:52.144494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:29.255 [2024-07-15 22:23:52.144505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:107288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.255 [2024-07-15 22:23:52.144511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:29.255 [2024-07-15 22:23:52.144886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:107424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.255 [2024-07-15 22:23:52.144895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:29.255 [2024-07-15 22:23:52.144906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:107456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.255 [2024-07-15 22:23:52.144914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:29.255 [2024-07-15 22:23:52.144924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:84 nsid:1 lba:107488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.255 [2024-07-15 22:23:52.144930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.255 [2024-07-15 22:23:52.144940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:107704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.255 [2024-07-15 22:23:52.144945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:29.255 [2024-07-15 22:23:52.144955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:107064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.255 [2024-07-15 22:23:52.144961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:29.255 [2024-07-15 22:23:52.144971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:106976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.255 [2024-07-15 22:23:52.144976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:29.255 [2024-07-15 22:23:52.145035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:107760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.255 [2024-07-15 22:23:52.145042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:29.255 [2024-07-15 22:23:52.145053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:107792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.255 [2024-07-15 22:23:52.145059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:29.255 [2024-07-15 22:23:52.145520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:107824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.255 [2024-07-15 22:23:52.145531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:29.255 [2024-07-15 22:23:52.145542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:107456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.255 [2024-07-15 22:23:52.145548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:29.255 [2024-07-15 22:23:52.145558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:107704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.255 [2024-07-15 22:23:52.145563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:29.255 [2024-07-15 22:23:52.145574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:106976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.255 [2024-07-15 22:23:52.145579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:29.255 [2024-07-15 22:23:52.146025] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:107840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.255 [2024-07-15 22:23:52.146035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:29.255 [2024-07-15 22:23:52.146047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:107856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.255 [2024-07-15 22:23:52.146055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:29.255 [2024-07-15 22:23:52.146066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:107872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.255 [2024-07-15 22:23:52.146071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:29.255 [2024-07-15 22:23:52.146081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:107888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.255 [2024-07-15 22:23:52.146087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:29.255 [2024-07-15 22:23:52.146098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:107408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.255 [2024-07-15 22:23:52.146104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:29.255 [2024-07-15 22:23:52.146114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:107224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.255 [2024-07-15 22:23:52.146119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:29.255 [2024-07-15 22:23:52.146133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:107464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.255 [2024-07-15 22:23:52.146138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:29.255 [2024-07-15 22:23:52.146149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:107520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.255 [2024-07-15 22:23:52.146154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:29.255 [2024-07-15 22:23:52.146227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:107792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.255 [2024-07-15 22:23:52.146234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:29.255 [2024-07-15 22:23:52.146245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:107456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.255 [2024-07-15 22:23:52.146252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 
sqhd:0035 p:0 m:0 dnr:0 00:26:29.255 [2024-07-15 22:23:52.146263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:106976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.255 [2024-07-15 22:23:52.146269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:29.255 [2024-07-15 22:23:52.146279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:107904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.255 [2024-07-15 22:23:52.146285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:29.255 [2024-07-15 22:23:52.146296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:107920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.255 [2024-07-15 22:23:52.146301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:29.255 [2024-07-15 22:23:52.146312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:107936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.255 [2024-07-15 22:23:52.146317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:29.255 [2024-07-15 22:23:52.146332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:107096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.255 [2024-07-15 22:23:52.146337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:29.255 [2024-07-15 22:23:52.146347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:107496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.255 [2024-07-15 22:23:52.146352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:29.255 [2024-07-15 22:23:52.146427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:107528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.255 [2024-07-15 22:23:52.146434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:29.255 [2024-07-15 22:23:52.146446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.255 [2024-07-15 22:23:52.146454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:29.256 [2024-07-15 22:23:52.147038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:107856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.256 [2024-07-15 22:23:52.147048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:29.256 [2024-07-15 22:23:52.147059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:107888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.257 [2024-07-15 22:23:52.147064] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:29.257 [2024-07-15 22:23:52.147075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:107224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.257 [2024-07-15 22:23:52.147080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:29.257 [2024-07-15 22:23:52.147091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:107520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.257 [2024-07-15 22:23:52.147096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:29.257 [2024-07-15 22:23:52.147106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:107456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.257 [2024-07-15 22:23:52.147111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.257 [2024-07-15 22:23:52.147121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:107904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.257 [2024-07-15 22:23:52.147131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:29.257 [2024-07-15 22:23:52.147141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:107936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.257 [2024-07-15 22:23:52.147147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:29.257 [2024-07-15 22:23:52.147157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:107496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.257 [2024-07-15 22:23:52.147162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:29.257 [2024-07-15 22:23:52.147175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.257 [2024-07-15 22:23:52.147181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:29.257 [2024-07-15 22:23:52.147303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:107960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.257 [2024-07-15 22:23:52.147311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:29.257 [2024-07-15 22:23:52.147321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:107976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.257 [2024-07-15 22:23:52.147326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:29.257 [2024-07-15 22:23:52.147336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:107992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.257 [2024-07-15 
22:23:52.147342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.257 [2024-07-15 22:23:52.147352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:108008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.257 [2024-07-15 22:23:52.147357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:29.257 [2024-07-15 22:23:52.147368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:108024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.257 [2024-07-15 22:23:52.147373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:29.257 [2024-07-15 22:23:52.147701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:108040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.257 [2024-07-15 22:23:52.147711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:29.257 [2024-07-15 22:23:52.147723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:107608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.257 [2024-07-15 22:23:52.147728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:29.257 [2024-07-15 22:23:52.147880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:107632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.257 [2024-07-15 22:23:52.147888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:29.257 [2024-07-15 22:23:52.147899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:107664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.257 [2024-07-15 22:23:52.147905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:29.257 [2024-07-15 22:23:52.147916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.257 [2024-07-15 22:23:52.147921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:29.257 [2024-07-15 22:23:52.147959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:107520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.257 [2024-07-15 22:23:52.147966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:29.257 [2024-07-15 22:23:52.147977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:107904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.257 [2024-07-15 22:23:52.147985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:29.257 [2024-07-15 22:23:52.147995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:107496 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.257 [2024-07-15 22:23:52.148000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:29.257 [2024-07-15 22:23:52.148010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:107536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.257 [2024-07-15 22:23:52.148016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:29.257 [2024-07-15 22:23:52.148026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:107976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.257 [2024-07-15 22:23:52.148031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:29.257 [2024-07-15 22:23:52.148041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:108008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.257 [2024-07-15 22:23:52.148046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:29.257 [2024-07-15 22:23:52.148118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:108048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.257 [2024-07-15 22:23:52.148130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:29.257 [2024-07-15 22:23:52.148141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:108064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.257 [2024-07-15 22:23:52.148147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:29.257 [2024-07-15 22:23:52.148157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:107600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.257 [2024-07-15 22:23:52.148161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:29.257 [2024-07-15 22:23:52.148172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:107624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.257 [2024-07-15 22:23:52.148178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:29.257 [2024-07-15 22:23:52.148223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:107552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.257 [2024-07-15 22:23:52.148231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:29.257 [2024-07-15 22:23:52.148241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:107696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.257 [2024-07-15 22:23:52.148246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:29.257 [2024-07-15 22:23:52.148257] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:107608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.257 [2024-07-15 22:23:52.148262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:29.257 [2024-07-15 22:23:52.148363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.257 [2024-07-15 22:23:52.148372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:29.257 [2024-07-15 22:23:52.148436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:107904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.257 [2024-07-15 22:23:52.148444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:29.257 [2024-07-15 22:23:52.148455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:107536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.257 [2024-07-15 22:23:52.148460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:29.257 [2024-07-15 22:23:52.148470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:108008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.257 [2024-07-15 22:23:52.148475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:29.257 [2024-07-15 22:23:52.148585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:108064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.257 [2024-07-15 22:23:52.148593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.257 [2024-07-15 22:23:52.148604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:107624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.257 [2024-07-15 22:23:52.148609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:29.257 [2024-07-15 22:23:52.148619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:107696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.257 [2024-07-15 22:23:52.148624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:29.257 [2024-07-15 22:23:52.149167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:107536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.257 [2024-07-15 22:23:52.149179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:29.257 [2024-07-15 22:23:52.149191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:107624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.258 [2024-07-15 22:23:52.149196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0066 p:0 m:0 
dnr:0 00:26:29.258 [2024-07-15 22:23:52.149485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:108080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.258 [2024-07-15 22:23:52.149493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:29.258 [2024-07-15 22:23:52.149503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:108096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.258 [2024-07-15 22:23:52.149509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:29.258 [2024-07-15 22:23:52.149520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:108112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.258 [2024-07-15 22:23:52.149525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.258 [2024-07-15 22:23:52.149535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:108128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.258 [2024-07-15 22:23:52.149540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:29.258 [2024-07-15 22:23:52.149552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:108144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.258 [2024-07-15 22:23:52.149557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:29.258 [2024-07-15 22:23:52.149569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:108160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.258 [2024-07-15 22:23:52.149574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:29.258 [2024-07-15 22:23:52.150087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:107616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.258 [2024-07-15 22:23:52.150096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:29.258 [2024-07-15 22:23:52.150107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:107584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.258 [2024-07-15 22:23:52.150112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:29.258 [2024-07-15 22:23:52.150127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:107768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.258 [2024-07-15 22:23:52.150133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:29.258 [2024-07-15 22:23:52.150144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:107720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.258 [2024-07-15 22:23:52.150148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:29.258 [2024-07-15 22:23:52.150159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:107624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.258 [2024-07-15 22:23:52.150164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:29.258 [2024-07-15 22:23:52.150174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:108096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.258 [2024-07-15 22:23:52.150179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:29.258 [2024-07-15 22:23:52.150190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:108128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.258 [2024-07-15 22:23:52.150195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:29.258 [2024-07-15 22:23:52.150336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:108160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.258 [2024-07-15 22:23:52.150344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:29.258 [2024-07-15 22:23:52.150932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:108176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.258 [2024-07-15 22:23:52.150942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:29.258 [2024-07-15 22:23:52.150953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:108192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.258 [2024-07-15 22:23:52.150959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:29.258 [2024-07-15 22:23:52.150973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:107672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.258 [2024-07-15 22:23:52.150979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:29.258 [2024-07-15 22:23:52.150989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:107784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.258 [2024-07-15 22:23:52.150994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:29.258 [2024-07-15 22:23:52.151006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:107816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.258 [2024-07-15 22:23:52.151012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:29.258 [2024-07-15 22:23:52.151022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:107776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.258 [2024-07-15 22:23:52.151028] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:29.258 [2024-07-15 22:23:52.151039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.258 [2024-07-15 22:23:52.151045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:29.258 [2024-07-15 22:23:52.151056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:107720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.258 [2024-07-15 22:23:52.151061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:29.258 [2024-07-15 22:23:52.151073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:108096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.258 [2024-07-15 22:23:52.151079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:29.258 [2024-07-15 22:23:52.152053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:108208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.258 [2024-07-15 22:23:52.152065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:29.258 [2024-07-15 22:23:52.152076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:108224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.258 [2024-07-15 22:23:52.152082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:29.258 [2024-07-15 22:23:52.152093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:108240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.258 [2024-07-15 22:23:52.152098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.258 [2024-07-15 22:23:52.152109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:108256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.258 [2024-07-15 22:23:52.152114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.258 [2024-07-15 22:23:52.152128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:108272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.258 [2024-07-15 22:23:52.152134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.258 [2024-07-15 22:23:52.152147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:108288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.258 [2024-07-15 22:23:52.152152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:29.258 [2024-07-15 22:23:52.152162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:108304 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:29.258 [2024-07-15 22:23:52.152167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:29.258 [2024-07-15 22:23:52.152178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:107848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.258 [2024-07-15 22:23:52.152184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:29.258 [2024-07-15 22:23:52.152194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:107880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.258 [2024-07-15 22:23:52.152199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:29.258 [2024-07-15 22:23:52.152209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:107824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.258 [2024-07-15 22:23:52.152215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:29.258 [2024-07-15 22:23:52.152225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:107896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.258 [2024-07-15 22:23:52.152230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:29.258 [2024-07-15 22:23:52.152240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:108192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.258 [2024-07-15 22:23:52.152245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:29.258 [2024-07-15 22:23:52.152255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:107784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.258 [2024-07-15 22:23:52.152260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:29.258 [2024-07-15 22:23:52.152270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.258 [2024-07-15 22:23:52.152275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:29.258 [2024-07-15 22:23:52.152286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:107720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.258 [2024-07-15 22:23:52.152292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:29.258 [2024-07-15 22:23:52.152722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:108312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.258 [2024-07-15 22:23:52.152732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:29.258 [2024-07-15 22:23:52.152742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:21 nsid:1 lba:108328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.258 [2024-07-15 22:23:52.152748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:29.258 [2024-07-15 22:23:52.152758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:108344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.258 [2024-07-15 22:23:52.152765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:29.259 [2024-07-15 22:23:52.152776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:108360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.259 [2024-07-15 22:23:52.152782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:29.259 [2024-07-15 22:23:52.152792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:108376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.259 [2024-07-15 22:23:52.152797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:29.259 [2024-07-15 22:23:52.152808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:108392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.259 [2024-07-15 22:23:52.152813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:29.259 [2024-07-15 22:23:52.152824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:108408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.259 [2024-07-15 22:23:52.152829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:29.259 [2024-07-15 22:23:52.152839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:107912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.259 [2024-07-15 22:23:52.152845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:29.259 [2024-07-15 22:23:52.152855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:107944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.259 [2024-07-15 22:23:52.152860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:29.259 [2024-07-15 22:23:52.152948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:107872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.259 [2024-07-15 22:23:52.152957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:29.259 [2024-07-15 22:23:52.152967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:107920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.259 [2024-07-15 22:23:52.152973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:29.259 [2024-07-15 
22:23:52.152983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:108224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.259 [2024-07-15 22:23:52.152988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:29.259 [2024-07-15 22:23:52.152999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:108256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.259 [2024-07-15 22:23:52.153004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:29.259 [2024-07-15 22:23:52.153014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:108288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.259 [2024-07-15 22:23:52.153019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:29.259 [2024-07-15 22:23:52.153029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:107848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.259 [2024-07-15 22:23:52.153037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:29.259 [2024-07-15 22:23:52.153048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:107824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.259 [2024-07-15 22:23:52.153053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:29.259 [2024-07-15 22:23:52.153118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:108192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.259 [2024-07-15 22:23:52.153130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:29.259 [2024-07-15 22:23:52.153141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:107776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.259 [2024-07-15 22:23:52.153147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:29.259 [2024-07-15 22:23:52.153158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:107952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.259 [2024-07-15 22:23:52.153163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:29.259 [2024-07-15 22:23:52.153173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:108432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.259 [2024-07-15 22:23:52.153178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:29.259 [2024-07-15 22:23:52.153188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:108448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.259 [2024-07-15 22:23:52.153194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:107 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:29.259 [2024-07-15 22:23:52.153204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:107984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.259 [2024-07-15 22:23:52.153209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.259 [2024-07-15 22:23:52.153219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:108016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.259 [2024-07-15 22:23:52.153225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:29.259 [2024-07-15 22:23:52.153236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:107856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.259 [2024-07-15 22:23:52.153241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:29.259 [2024-07-15 22:23:52.153333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:107960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.259 [2024-07-15 22:23:52.153340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:29.259 [2024-07-15 22:23:52.153351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:108328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.259 [2024-07-15 22:23:52.153357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:29.259 [2024-07-15 22:23:52.153367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:108360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.259 [2024-07-15 22:23:52.153372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:29.259 [2024-07-15 22:23:52.153881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:108392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.259 [2024-07-15 22:23:52.153892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:29.259 [2024-07-15 22:23:52.153903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:107912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.259 [2024-07-15 22:23:52.153909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:29.259 [2024-07-15 22:23:52.153920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:107920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.259 [2024-07-15 22:23:52.153925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:29.259 [2024-07-15 22:23:52.153936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:108256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.259 [2024-07-15 22:23:52.153941] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:29.259 [2024-07-15 22:23:52.153951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:107848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.259 [2024-07-15 22:23:52.153958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:29.259 [2024-07-15 22:23:52.153969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:107776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.259 [2024-07-15 22:23:52.153974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:29.259 [2024-07-15 22:23:52.153985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:108432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.259 [2024-07-15 22:23:52.153991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:29.259 [2024-07-15 22:23:52.154001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:107984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.259 [2024-07-15 22:23:52.154006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:29.259 [2024-07-15 22:23:52.154017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:107856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.259 [2024-07-15 22:23:52.154022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:29.259 [2024-07-15 22:23:52.154033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:108328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.259 [2024-07-15 22:23:52.154038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:29.259 [2024-07-15 22:23:52.154142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:108456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.259 [2024-07-15 22:23:52.154150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:29.259 [2024-07-15 22:23:52.154161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:108472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.259 [2024-07-15 22:23:52.154167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:29.259 [2024-07-15 22:23:52.154180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:108488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.259 [2024-07-15 22:23:52.154185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:29.259 [2024-07-15 22:23:52.154195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:108504 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:29.259 [2024-07-15 22:23:52.154201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:29.259 [2024-07-15 22:23:52.154247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:108520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.259 [2024-07-15 22:23:52.154255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:29.259 [2024-07-15 22:23:52.154266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:108024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.259 [2024-07-15 22:23:52.154272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:29.259 [2024-07-15 22:23:52.154283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:108072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.259 [2024-07-15 22:23:52.154289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:29.259 [2024-07-15 22:23:52.162034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:107888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.260 [2024-07-15 22:23:52.162054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:29.260 [2024-07-15 22:23:52.162068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:108048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.260 [2024-07-15 22:23:52.162074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:29.260 [2024-07-15 22:23:52.162085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:107912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.260 [2024-07-15 22:23:52.162091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:29.260 [2024-07-15 22:23:52.162102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:108256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.260 [2024-07-15 22:23:52.162107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:29.260 [2024-07-15 22:23:52.162118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:107776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.260 [2024-07-15 22:23:52.162129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:29.260 [2024-07-15 22:23:52.162140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:107984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.260 [2024-07-15 22:23:52.162145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:29.260 [2024-07-15 22:23:52.162156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:53 nsid:1 lba:108328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.260 [2024-07-15 22:23:52.162161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:29.260 [2024-07-15 22:23:52.162175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:108472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.260 [2024-07-15 22:23:52.162181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:29.260 [2024-07-15 22:23:52.162192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:108504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.260 [2024-07-15 22:23:52.162197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:29.260 [2024-07-15 22:23:52.163377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:108024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.260 [2024-07-15 22:23:52.163390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.260 [2024-07-15 22:23:52.164289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:108536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.260 [2024-07-15 22:23:52.164302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:29.260 [2024-07-15 22:23:52.164314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:108552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.260 [2024-07-15 22:23:52.164319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:29.260 [2024-07-15 22:23:52.164329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:108568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.260 [2024-07-15 22:23:52.164335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:29.260 [2024-07-15 22:23:52.164345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:107904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.260 [2024-07-15 22:23:52.164350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:29.260 [2024-07-15 22:23:52.164361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:108064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.260 [2024-07-15 22:23:52.164366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:29.260 [2024-07-15 22:23:52.164377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:108104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.260 [2024-07-15 22:23:52.164383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:29.260 [2024-07-15 
22:23:52.164394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.260 [2024-07-15 22:23:52.164399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.260 [2024-07-15 22:23:52.164409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:108584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.260 [2024-07-15 22:23:52.164415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:29.260 [2024-07-15 22:23:52.164425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:108600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.260 [2024-07-15 22:23:52.164430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:29.260 [2024-07-15 22:23:52.164440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:108616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.260 [2024-07-15 22:23:52.164448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:29.260 [2024-07-15 22:23:52.164459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:108632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.260 [2024-07-15 22:23:52.164464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:29.260 [2024-07-15 22:23:52.164475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:108648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.260 [2024-07-15 22:23:52.164480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:29.260 [2024-07-15 22:23:52.164490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:108664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.260 [2024-07-15 22:23:52.164496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:29.260 [2024-07-15 22:23:52.164506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:108112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.260 [2024-07-15 22:23:52.164512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:29.260 [2024-07-15 22:23:52.164523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:108168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.260 [2024-07-15 22:23:52.164528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:29.260 [2024-07-15 22:23:52.164538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:108200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.260 [2024-07-15 22:23:52.164544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:71 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:29.260 [2024-07-15 22:23:52.164555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:108048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.260 [2024-07-15 22:23:52.164560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:29.260 [2024-07-15 22:23:52.164571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:108256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.260 [2024-07-15 22:23:52.164576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:29.260 [2024-07-15 22:23:52.164586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:107984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.260 [2024-07-15 22:23:52.164591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:29.260 [2024-07-15 22:23:52.164603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:108472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.260 [2024-07-15 22:23:52.164608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:29.260 [2024-07-15 22:23:52.164619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:108128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.260 [2024-07-15 22:23:52.164625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:29.260 [2024-07-15 22:23:52.164636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:108672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.260 [2024-07-15 22:23:52.164642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:29.260 [2024-07-15 22:23:52.164653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:108688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.260 [2024-07-15 22:23:52.164659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:29.260 [2024-07-15 22:23:52.164669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:108704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.260 [2024-07-15 22:23:52.164674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:29.260 [2024-07-15 22:23:52.164685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:108216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.260 [2024-07-15 22:23:52.164691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:29.260 [2024-07-15 22:23:52.164701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:108248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.260 [2024-07-15 22:23:52.164706] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:29.260 [2024-07-15 22:23:52.164717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:108280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:29.260 [2024-07-15 22:23:52.164722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005d p:0 m:0 dnr:0
[... repeated nvme_qpair.c NOTICE pairs (243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion) for READ and WRITE commands on sqid:1, lba values in the 107904-109544 range, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1, timestamps 2024-07-15 22:23:52.164-22:23:52.175 ...]
00:26:29.266 [2024-07-15 22:23:52.175255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:109344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:29.266 [2024-07-15 22:23:52.175260]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:29.266 [2024-07-15 22:23:52.175272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:109056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.266 [2024-07-15 22:23:52.175277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:29.266 [2024-07-15 22:23:52.175287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:109256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.266 [2024-07-15 22:23:52.175293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:29.266 [2024-07-15 22:23:52.175303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:109288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.266 [2024-07-15 22:23:52.175309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:29.266 [2024-07-15 22:23:52.175319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:109320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.266 [2024-07-15 22:23:52.175325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:29.266 [2024-07-15 22:23:52.175336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:109480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.266 [2024-07-15 22:23:52.175341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:29.266 [2024-07-15 22:23:52.175352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:109512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.266 [2024-07-15 22:23:52.175357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:29.266 [2024-07-15 22:23:52.175369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:109544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.266 [2024-07-15 22:23:52.175375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:29.266 [2024-07-15 22:23:52.175386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:109240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.266 [2024-07-15 22:23:52.175392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:29.266 [2024-07-15 22:23:52.175403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:109232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.266 [2024-07-15 22:23:52.175409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:29.266 [2024-07-15 22:23:52.175421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:108848 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:29.266 [2024-07-15 22:23:52.175426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:29.266 [2024-07-15 22:23:52.175437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:109040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.266 [2024-07-15 22:23:52.175444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:29.266 [2024-07-15 22:23:52.175682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:109552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.266 [2024-07-15 22:23:52.175690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:29.266 [2024-07-15 22:23:52.175702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:109568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.266 [2024-07-15 22:23:52.175707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:29.266 [2024-07-15 22:23:52.175718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:109584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.266 [2024-07-15 22:23:52.175724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:29.266 [2024-07-15 22:23:52.175734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:109600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.266 [2024-07-15 22:23:52.175740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:29.266 [2024-07-15 22:23:52.175750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:109616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.266 [2024-07-15 22:23:52.175756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:29.266 [2024-07-15 22:23:52.175767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:109632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.266 [2024-07-15 22:23:52.175772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:29.266 [2024-07-15 22:23:52.175783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:109648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.266 [2024-07-15 22:23:52.175790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:29.266 [2024-07-15 22:23:52.175800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:109664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.266 [2024-07-15 22:23:52.175805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:29.266 [2024-07-15 22:23:52.175817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:63 nsid:1 lba:109104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.266 [2024-07-15 22:23:52.175822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:29.266 [2024-07-15 22:23:52.175833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:109168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.266 [2024-07-15 22:23:52.175839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:29.266 [2024-07-15 22:23:52.175850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:109336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.266 [2024-07-15 22:23:52.175858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:29.266 [2024-07-15 22:23:52.176673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:109216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.266 [2024-07-15 22:23:52.176687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:29.266 [2024-07-15 22:23:52.176699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:109680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.266 [2024-07-15 22:23:52.176704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.266 [2024-07-15 22:23:52.176715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:109696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.266 [2024-07-15 22:23:52.176720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:29.266 [2024-07-15 22:23:52.176731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:109712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.266 [2024-07-15 22:23:52.176736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:29.266 [2024-07-15 22:23:52.176746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:109728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.266 [2024-07-15 22:23:52.176751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:29.266 [2024-07-15 22:23:52.176761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:108704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.266 [2024-07-15 22:23:52.176767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:29.266 [2024-07-15 22:23:52.176778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:109008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.266 [2024-07-15 22:23:52.176783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:29.266 [2024-07-15 
22:23:52.176793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:108864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.266 [2024-07-15 22:23:52.176798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:29.266 [2024-07-15 22:23:52.176808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:109416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.266 [2024-07-15 22:23:52.176813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.266 [2024-07-15 22:23:52.176825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:109096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.266 [2024-07-15 22:23:52.176830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:29.266 [2024-07-15 22:23:52.176840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:109296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.266 [2024-07-15 22:23:52.176846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:29.266 [2024-07-15 22:23:52.176856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:109184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.266 [2024-07-15 22:23:52.176865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:29.266 [2024-07-15 22:23:52.176875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:109056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.266 [2024-07-15 22:23:52.176881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:29.266 [2024-07-15 22:23:52.176892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:109288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.266 [2024-07-15 22:23:52.176897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:29.266 [2024-07-15 22:23:52.176908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:109480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.266 [2024-07-15 22:23:52.176914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:29.266 [2024-07-15 22:23:52.176924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:109544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.266 [2024-07-15 22:23:52.176930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:29.267 [2024-07-15 22:23:52.176940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:109232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.267 [2024-07-15 22:23:52.176946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:109 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:29.267 [2024-07-15 22:23:52.176957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:109040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.267 [2024-07-15 22:23:52.176963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:29.267 [2024-07-15 22:23:52.176974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:109392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.267 [2024-07-15 22:23:52.176980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:29.267 [2024-07-15 22:23:52.176991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:109424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.267 [2024-07-15 22:23:52.176997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:29.267 [2024-07-15 22:23:52.177007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:109568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.267 [2024-07-15 22:23:52.177013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:29.267 [2024-07-15 22:23:52.177024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:109600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.267 [2024-07-15 22:23:52.177030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:29.267 [2024-07-15 22:23:52.177040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:109632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.267 [2024-07-15 22:23:52.177046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:29.267 [2024-07-15 22:23:52.177056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:109664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.267 [2024-07-15 22:23:52.177064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:29.267 [2024-07-15 22:23:52.177075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:109168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.267 [2024-07-15 22:23:52.177081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:29.267 [2024-07-15 22:23:52.177275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:109736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.267 [2024-07-15 22:23:52.177283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:29.267 [2024-07-15 22:23:52.177295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:109752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.267 [2024-07-15 22:23:52.177300] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:29.267 [2024-07-15 22:23:52.177311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:109768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.267 [2024-07-15 22:23:52.177317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:29.267 [2024-07-15 22:23:52.177327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:109784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.267 [2024-07-15 22:23:52.177333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:29.267 [2024-07-15 22:23:52.177344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:109800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.267 [2024-07-15 22:23:52.177349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:29.267 [2024-07-15 22:23:52.177360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:109440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.267 [2024-07-15 22:23:52.177365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:29.267 [2024-07-15 22:23:52.177376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:109248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.267 [2024-07-15 22:23:52.177381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:29.267 [2024-07-15 22:23:52.177392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:109312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.267 [2024-07-15 22:23:52.177398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:29.267 [2024-07-15 22:23:52.177409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:109152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.267 [2024-07-15 22:23:52.177414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.267 [2024-07-15 22:23:52.177494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:109472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.267 [2024-07-15 22:23:52.177502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:29.267 [2024-07-15 22:23:52.177513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:109504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.267 [2024-07-15 22:23:52.177519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:29.267 [2024-07-15 22:23:52.177532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:109536 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:29.267 [2024-07-15 22:23:52.177538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:29.267 [2024-07-15 22:23:52.177549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:108944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.267 [2024-07-15 22:23:52.177555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:29.267 [2024-07-15 22:23:52.178729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:109680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.267 [2024-07-15 22:23:52.178741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:29.267 [2024-07-15 22:23:52.178752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:109712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.267 [2024-07-15 22:23:52.178758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:29.267 [2024-07-15 22:23:52.178768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:108704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.267 [2024-07-15 22:23:52.178773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.267 [2024-07-15 22:23:52.178784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:108864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.267 [2024-07-15 22:23:52.178789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:29.267 [2024-07-15 22:23:52.178799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:109096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.267 [2024-07-15 22:23:52.178804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:29.267 [2024-07-15 22:23:52.178814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:109184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.267 [2024-07-15 22:23:52.178820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:29.267 [2024-07-15 22:23:52.178831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:109288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.267 [2024-07-15 22:23:52.178836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:29.267 [2024-07-15 22:23:52.178846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:109544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.267 [2024-07-15 22:23:52.178851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:29.267 [2024-07-15 22:23:52.178861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:109 nsid:1 lba:109040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.267 [2024-07-15 22:23:52.178866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:29.267 [2024-07-15 22:23:52.178876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:109424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.267 [2024-07-15 22:23:52.178882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:29.267 [2024-07-15 22:23:52.178895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:109600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.267 [2024-07-15 22:23:52.178901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:29.267 [2024-07-15 22:23:52.178911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:109664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.267 [2024-07-15 22:23:52.178917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:29.267 [2024-07-15 22:23:52.178928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:109752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.267 [2024-07-15 22:23:52.178932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:29.267 [2024-07-15 22:23:52.178943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:109784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.267 [2024-07-15 22:23:52.178948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:29.267 [2024-07-15 22:23:52.178959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:109440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.267 [2024-07-15 22:23:52.178965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:29.267 [2024-07-15 22:23:52.178975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:109312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.267 [2024-07-15 22:23:52.178981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:29.267 [2024-07-15 22:23:52.178991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:109504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.267 [2024-07-15 22:23:52.178997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:29.267 [2024-07-15 22:23:52.179008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:108944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.267 [2024-07-15 22:23:52.179013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:29.267 [2024-07-15 
22:23:52.179402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:109824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.267 [2024-07-15 22:23:52.179411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:29.267 [2024-07-15 22:23:52.179422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:109840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.268 [2024-07-15 22:23:52.179429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:29.268 [2024-07-15 22:23:52.179439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:109856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.268 [2024-07-15 22:23:52.179445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:29.268 [2024-07-15 22:23:52.179455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:109872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.268 [2024-07-15 22:23:52.179461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:29.268 [2024-07-15 22:23:52.179473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:109888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.268 [2024-07-15 22:23:52.179482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:29.268 [2024-07-15 22:23:52.179493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:109904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.268 [2024-07-15 22:23:52.179499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:29.268 [2024-07-15 22:23:52.179509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:109920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.268 [2024-07-15 22:23:52.179515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:29.268 [2024-07-15 22:23:52.179525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:109936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.268 [2024-07-15 22:23:52.179531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.268 [2024-07-15 22:23:52.179784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:109952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.268 [2024-07-15 22:23:52.179793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.268 [2024-07-15 22:23:52.179804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:109968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.268 [2024-07-15 22:23:52.179810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:62 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.268 [2024-07-15 22:23:52.179821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:109432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.268 [2024-07-15 22:23:52.179827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:29.268 [2024-07-15 22:23:52.179838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:109328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.268 [2024-07-15 22:23:52.179844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:29.268 [2024-07-15 22:23:52.179854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:109712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.268 [2024-07-15 22:23:52.179860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:29.268 [2024-07-15 22:23:52.179871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:108864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.268 [2024-07-15 22:23:52.179877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:29.268 [2024-07-15 22:23:52.179888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:109184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.268 [2024-07-15 22:23:52.179893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:29.268 [2024-07-15 22:23:52.179904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:109544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.268 [2024-07-15 22:23:52.179909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:29.268 [2024-07-15 22:23:52.179920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:109424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.268 [2024-07-15 22:23:52.179928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:29.268 [2024-07-15 22:23:52.179938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:109664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.268 [2024-07-15 22:23:52.179944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:29.268 [2024-07-15 22:23:52.179954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:109784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.268 [2024-07-15 22:23:52.179960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:29.268 [2024-07-15 22:23:52.179970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:109312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.268 [2024-07-15 22:23:52.179976] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:29.268 [2024-07-15 22:23:52.179986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:108944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.268 [2024-07-15 22:23:52.179992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:29.268 [2024-07-15 22:23:52.180098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:109496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.268 [2024-07-15 22:23:52.180106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:29.268 [2024-07-15 22:23:52.180117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:109560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.268 [2024-07-15 22:23:52.180127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:29.268 [2024-07-15 22:23:52.180137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:109592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.268 [2024-07-15 22:23:52.180144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:29.268 [2024-07-15 22:23:52.180155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:109624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.268 [2024-07-15 22:23:52.180161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:29.268 [2024-07-15 22:23:52.180171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:109656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.268 [2024-07-15 22:23:52.180178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:29.268 [2024-07-15 22:23:52.180285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:109840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.268 [2024-07-15 22:23:52.180294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:29.268 [2024-07-15 22:23:52.180305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:109872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.268 [2024-07-15 22:23:52.180311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:29.268 [2024-07-15 22:23:52.180321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:109904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.268 [2024-07-15 22:23:52.180329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:29.268 [2024-07-15 22:23:52.180340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:109936 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:29.268 [2024-07-15 22:23:52.180346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:29.268 [2024-07-15 22:23:52.180356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:109704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.268 [2024-07-15 22:23:52.180362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:29.268 [2024-07-15 22:23:52.180373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:109384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.268 [2024-07-15 22:23:52.180378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:29.268 [2024-07-15 22:23:52.180390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:109344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.268 [2024-07-15 22:23:52.180395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:29.268 [2024-07-15 22:23:52.181073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:109968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.268 [2024-07-15 22:23:52.181084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:29.268 [2024-07-15 22:23:52.181095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:109328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.268 [2024-07-15 22:23:52.181100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:29.268 [2024-07-15 22:23:52.181111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:108864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.268 [2024-07-15 22:23:52.181116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:29.268 [2024-07-15 22:23:52.181131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:109544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.268 [2024-07-15 22:23:52.181136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:29.268 [2024-07-15 22:23:52.181146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:109664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.268 [2024-07-15 22:23:52.181151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:29.268 [2024-07-15 22:23:52.181162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:109312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.268 [2024-07-15 22:23:52.181168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:29.268 [2024-07-15 22:23:52.181178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:98 nsid:1 lba:108848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.268 [2024-07-15 22:23:52.181184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:29.268 [2024-07-15 22:23:52.181194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:109560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.268 [2024-07-15 22:23:52.181199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:29.268 [2024-07-15 22:23:52.181212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:109624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.268 [2024-07-15 22:23:52.181217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.268 [2024-07-15 22:23:52.181371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:109872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.268 [2024-07-15 22:23:52.181380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:29.268 [2024-07-15 22:23:52.181391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:109936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.269 [2024-07-15 22:23:52.181397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:29.269 [2024-07-15 22:23:52.181407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:109384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.269 [2024-07-15 22:23:52.181414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:29.269 [2024-07-15 22:23:52.181587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:109976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.269 [2024-07-15 22:23:52.181596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:29.269 [2024-07-15 22:23:52.181607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:109992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.269 [2024-07-15 22:23:52.181613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:29.269 [2024-07-15 22:23:52.181624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:110008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.269 [2024-07-15 22:23:52.181629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:29.269 [2024-07-15 22:23:52.181639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:110024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.269 [2024-07-15 22:23:52.181645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:29.269 [2024-07-15 
22:23:52.181656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:110040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:29.269 [2024-07-15 22:23:52.181662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002a p:0 m:0 dnr:0
[... several hundred further nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs omitted: between 22:23:52.181 and 22:23:52.193 every outstanding READ and WRITE on sqid:1 (lba range roughly 109312-110840, len:8) completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:26:29.274 [2024-07-15 22:23:52.193715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:110472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:29.274 [2024-07-15 22:23:52.193721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006d p:0 m:0 dnr:0
shutdown signal, test time was about 25.542493 seconds 00:26:29.274 00:26:29.274 Latency(us) 00:26:29.274 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:29.274 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:29.274 Verification LBA range: start 0x0 length 0x4000 00:26:29.274 Nvme0n1 : 25.54 10943.69 42.75 0.00 0.00 11677.40 363.52 3019898.88 00:26:29.274 =================================================================================================================== 00:26:29.274 Total : 10943.69 42.75 0.00 0.00 11677.40 363.52 3019898.88 00:26:29.274 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:29.274 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:26:29.274 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:29.274 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:26:29.274 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:29.274 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:26:29.274 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:29.274 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:26:29.274 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:29.274 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:29.274 rmmod nvme_tcp 00:26:29.533 rmmod nvme_fabrics 00:26:29.533 rmmod nvme_keyring 00:26:29.533 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:29.533 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:26:29.533 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:26:29.533 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2902741 ']' 00:26:29.533 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2902741 00:26:29.533 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2902741 ']' 00:26:29.533 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2902741 00:26:29.533 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:26:29.533 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:29.533 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2902741 00:26:29.533 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:29.533 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:29.533 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2902741' 00:26:29.533 killing process with pid 2902741 00:26:29.533 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2902741 00:26:29.533 22:23:54 
nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2902741 00:26:29.534 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:29.534 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:29.534 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:29.534 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:29.534 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:29.534 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:29.534 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:29.534 22:23:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:32.076 22:23:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:32.076 00:26:32.076 real 0m39.037s 00:26:32.077 user 1m40.919s 00:26:32.077 sys 0m10.556s 00:26:32.077 22:23:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:32.077 22:23:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:32.077 ************************************ 00:26:32.077 END TEST nvmf_host_multipath_status 00:26:32.077 ************************************ 00:26:32.077 22:23:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:32.077 22:23:56 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:32.077 22:23:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:32.077 22:23:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:32.077 22:23:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:32.077 ************************************ 00:26:32.077 START TEST nvmf_discovery_remove_ifc 00:26:32.077 ************************************ 00:26:32.077 22:23:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:32.077 * Looking for test storage... 
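The teardown that closed out nvmf_host_multipath_status above follows the usual pattern: delete the exported subsystem over the target's RPC socket, stop the nvmf_tgt reactor, then unload the kernel initiator modules (the rmmod lines in the trace are the -v output of that step). Condensed into a hand-runnable sketch, using the workspace path and NQN taken from this trace; the PID 2902741 is specific to this run, and the script can only 'wait' on it because it started the target from the same shell:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem first
  kill 2902741                                                      # stop the target reactor for this run
  modprobe -v -r nvme-tcp                                           # produces the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
  modprobe -v -r nvme-fabrics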
00:26:32.077 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:26:32.077 22:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:38.738 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:38.738 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:38.738 22:24:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:38.738 22:24:04 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:38.738 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:38.738 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:38.738 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:38.738 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:38.738 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:38.739 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:38.739 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:38.739 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:38.739 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:38.739 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:38.739 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:38.739 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:38.739 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:38.739 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:38.739 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:38.739 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:38.739 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:38.739 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:38.739 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:38.739 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:26:38.739 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:38.739 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:38.739 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:38.739 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:38.739 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:38.739 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:38.739 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:38.739 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:38.739 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:38.739 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:38.739 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:38.739 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:38.739 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:38.739 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:38.739 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:38.739 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:38.999 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:38.999 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:38.999 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:38.999 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:38.999 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:38.999 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:38.999 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:38.999 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:38.999 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.589 ms 00:26:38.999 00:26:38.999 --- 10.0.0.2 ping statistics --- 00:26:38.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:38.999 rtt min/avg/max/mdev = 0.589/0.589/0.589/0.000 ms 00:26:38.999 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:38.999 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:38.999 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.358 ms 00:26:38.999 00:26:38.999 --- 10.0.0.1 ping statistics --- 00:26:38.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:38.999 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:26:38.999 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:38.999 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:26:38.999 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:38.999 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:38.999 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:38.999 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:39.000 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:39.000 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:39.000 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:39.261 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:39.261 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:39.261 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:39.261 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:39.261 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2912902 00:26:39.261 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2912902 00:26:39.261 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:39.261 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2912902 ']' 00:26:39.261 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:39.262 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:39.262 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:39.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:39.262 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:39.262 22:24:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:39.262 [2024-07-15 22:24:04.421244] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
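Everything from the PCI scan down to the two pings above is nvmftestinit wiring one of the two e810-class ports into a private network namespace: the target side (cvl_0_0, 10.0.0.2) lives in cvl_0_0_ns_spdk, the initiator side (cvl_0_1, 10.0.0.1) stays in the root namespace, and both directions are sanity-checked with ping before nvme-tcp is loaded. The interface names come from /sys/bus/pci/devices/<bdf>/net/ of the ports the scan found. A minimal sketch of the same wiring, taken from the commands in the trace:

  ip netns add cvl_0_0_ns_spdk                  # namespace that will hold the SPDK target port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic on the initiator-side interface
  ping -c 1 10.0.0.2                            # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp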
00:26:39.262 [2024-07-15 22:24:04.421310] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:39.262 EAL: No free 2048 kB hugepages reported on node 1 00:26:39.262 [2024-07-15 22:24:04.510839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.523 [2024-07-15 22:24:04.603335] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:39.523 [2024-07-15 22:24:04.603394] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:39.523 [2024-07-15 22:24:04.603402] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:39.523 [2024-07-15 22:24:04.603408] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:39.523 [2024-07-15 22:24:04.603414] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:39.523 [2024-07-15 22:24:04.603442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:40.096 22:24:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:40.096 22:24:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:26:40.096 22:24:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:40.096 22:24:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:40.096 22:24:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:40.096 22:24:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:40.096 22:24:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:40.096 22:24:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.096 22:24:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:40.096 [2024-07-15 22:24:05.270620] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:40.096 [2024-07-15 22:24:05.278868] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:40.096 null0 00:26:40.096 [2024-07-15 22:24:05.310787] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:40.096 22:24:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.096 22:24:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2913030 00:26:40.096 22:24:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2913030 /tmp/host.sock 00:26:40.096 22:24:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:40.096 22:24:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2913030 ']' 00:26:40.096 22:24:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:26:40.096 22:24:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:26:40.096 22:24:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:40.096 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:40.096 22:24:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:40.096 22:24:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:40.096 [2024-07-15 22:24:05.390089] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:26:40.096 [2024-07-15 22:24:05.390185] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2913030 ] 00:26:40.357 EAL: No free 2048 kB hugepages reported on node 1 00:26:40.357 [2024-07-15 22:24:05.456766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.357 [2024-07-15 22:24:05.532704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.929 22:24:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:40.929 22:24:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:26:40.929 22:24:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:40.929 22:24:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:40.929 22:24:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.929 22:24:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:40.929 22:24:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.929 22:24:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:40.929 22:24:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.929 22:24:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:40.929 22:24:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.929 22:24:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:40.929 22:24:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.929 22:24:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:42.313 [2024-07-15 22:24:07.311436] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:42.313 [2024-07-15 22:24:07.311457] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:42.313 [2024-07-15 22:24:07.311470] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:42.313 [2024-07-15 22:24:07.440901] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:42.313 [2024-07-15 22:24:07.624894] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:42.313 [2024-07-15 22:24:07.624942] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:42.313 [2024-07-15 22:24:07.624966] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:42.313 [2024-07-15 22:24:07.624983] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:42.313 [2024-07-15 22:24:07.625003] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:42.313 22:24:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.313 22:24:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:42.313 22:24:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:42.313 [2024-07-15 22:24:07.629183] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x18637b0 was disconnected and freed. delete nvme_qpair. 00:26:42.313 22:24:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:42.313 22:24:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:42.313 22:24:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.313 22:24:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:42.313 22:24:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:42.313 22:24:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:42.574 22:24:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.574 22:24:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:42.574 22:24:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:42.574 22:24:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:42.574 22:24:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:42.574 22:24:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:42.574 22:24:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:42.574 22:24:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:42.574 22:24:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:42.574 22:24:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.574 22:24:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:42.574 22:24:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:42.574 22:24:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.574 22:24:07 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:42.574 22:24:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:43.958 22:24:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:43.958 22:24:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:43.958 22:24:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:43.958 22:24:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.958 22:24:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:43.958 22:24:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:43.958 22:24:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:43.958 22:24:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.958 22:24:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:43.958 22:24:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:44.898 22:24:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:44.898 22:24:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:44.898 22:24:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:44.898 22:24:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.898 22:24:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:44.898 22:24:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:44.898 22:24:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:44.898 22:24:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.898 22:24:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:44.898 22:24:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:45.870 22:24:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:45.870 22:24:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:45.870 22:24:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:45.870 22:24:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.870 22:24:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:45.870 22:24:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:45.870 22:24:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:45.870 22:24:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.870 22:24:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:45.870 22:24:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:46.808 22:24:12 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:46.808 22:24:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:46.808 22:24:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:46.808 22:24:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.808 22:24:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:46.808 22:24:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:46.808 22:24:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:46.808 22:24:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.808 22:24:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:46.808 22:24:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:47.749 [2024-07-15 22:24:13.065398] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:47.749 [2024-07-15 22:24:13.065439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.749 [2024-07-15 22:24:13.065451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.749 [2024-07-15 22:24:13.065461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.749 [2024-07-15 22:24:13.065468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.749 [2024-07-15 22:24:13.065476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.749 [2024-07-15 22:24:13.065484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.749 [2024-07-15 22:24:13.065496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.749 [2024-07-15 22:24:13.065504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.749 [2024-07-15 22:24:13.065513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.749 [2024-07-15 22:24:13.065520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.749 [2024-07-15 22:24:13.065527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182a040 is same with the state(5) to be set 00:26:48.010 [2024-07-15 22:24:13.075411] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182a040 (9): Bad file descriptor 00:26:48.010 22:24:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:48.010 22:24:13 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:48.010 22:24:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:48.010 22:24:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.010 22:24:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:48.010 22:24:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:48.010 22:24:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:48.010 [2024-07-15 22:24:13.085450] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:48.951 [2024-07-15 22:24:14.089147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:48.951 [2024-07-15 22:24:14.089186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182a040 with addr=10.0.0.2, port=4420 00:26:48.951 [2024-07-15 22:24:14.089198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182a040 is same with the state(5) to be set 00:26:48.951 [2024-07-15 22:24:14.089234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182a040 (9): Bad file descriptor 00:26:48.951 [2024-07-15 22:24:14.089589] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:48.951 [2024-07-15 22:24:14.089608] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:48.951 [2024-07-15 22:24:14.089615] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:48.951 [2024-07-15 22:24:14.089624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:48.951 [2024-07-15 22:24:14.089639] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.951 [2024-07-15 22:24:14.089648] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:48.951 22:24:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.951 22:24:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:48.951 22:24:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:49.897 [2024-07-15 22:24:15.092025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:49.897 [2024-07-15 22:24:15.092044] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:49.897 [2024-07-15 22:24:15.092051] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:49.897 [2024-07-15 22:24:15.092058] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:26:49.897 [2024-07-15 22:24:15.092070] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
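The reconnect storm above (connect() errno 110, "Unable to perform failover", repeated reset failures) is the expected outcome of the options this test passed when it started discovery on the host side. The "host" here is a second nvmf_tgt instance driven over /tmp/host.sock, and the RPC sequence traced earlier in the test is roughly:

  # host-side SPDK app, started with --wait-for-rpc so options can be set before init
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
  scripts/rpc.py -s /tmp/host.sock framework_start_init
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach

With a 1-second reconnect delay and a 2-second controller-loss timeout, the attached controller is expected to give up and land in the failed state within a couple of seconds once 10.0.0.2 stops answering, which is what the log shows next.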
00:26:49.897 [2024-07-15 22:24:15.092088] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:49.897 [2024-07-15 22:24:15.092112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:49.897 [2024-07-15 22:24:15.092126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.897 [2024-07-15 22:24:15.092136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:49.897 [2024-07-15 22:24:15.092143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.897 [2024-07-15 22:24:15.092151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:49.897 [2024-07-15 22:24:15.092158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.897 [2024-07-15 22:24:15.092166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:49.897 [2024-07-15 22:24:15.092173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.897 [2024-07-15 22:24:15.092182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:49.897 [2024-07-15 22:24:15.092190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.897 [2024-07-15 22:24:15.092197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
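What triggered this failure path is the interface removal a few seconds earlier (discovery_remove_ifc.sh@75-76): the test deletes the target-side address and downs the port inside the namespace, then keeps polling the host's bdev list until the discovered namespace bdev disappears. A loose paraphrase of that step, reusing the get_bdev_list pipeline visible in the trace:

  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
  # wait_for_bdev '': loop until bdev_get_bdevs reports nothing at all
  while [ -n "$(scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)" ]; do
      sleep 1
  done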
00:26:49.897 [2024-07-15 22:24:15.092555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18294c0 (9): Bad file descriptor 00:26:49.897 [2024-07-15 22:24:15.093565] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:49.897 [2024-07-15 22:24:15.093575] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:49.897 22:24:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:49.897 22:24:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:49.897 22:24:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:49.897 22:24:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.897 22:24:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:49.897 22:24:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:49.897 22:24:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:49.897 22:24:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.897 22:24:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:49.897 22:24:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:49.897 22:24:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:50.158 22:24:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:50.158 22:24:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:50.158 22:24:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:50.158 22:24:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:50.158 22:24:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:50.158 22:24:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.158 22:24:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:50.158 22:24:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:50.158 22:24:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.158 22:24:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:50.158 22:24:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:51.098 22:24:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:51.098 22:24:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:51.098 22:24:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:51.098 22:24:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.098 22:24:16 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:26:51.098 22:24:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:51.098 22:24:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:51.098 22:24:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.098 22:24:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:51.098 22:24:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:52.038 [2024-07-15 22:24:17.145315] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:52.039 [2024-07-15 22:24:17.145332] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:52.039 [2024-07-15 22:24:17.145346] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:52.039 [2024-07-15 22:24:17.234629] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:52.299 22:24:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:52.299 22:24:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:52.299 22:24:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:52.299 22:24:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.299 22:24:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:52.299 22:24:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:52.299 22:24:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:52.299 22:24:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.299 [2024-07-15 22:24:17.416635] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:52.299 [2024-07-15 22:24:17.416672] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:52.299 [2024-07-15 22:24:17.416692] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:52.299 [2024-07-15 22:24:17.416705] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:52.299 [2024-07-15 22:24:17.416713] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:52.299 [2024-07-15 22:24:17.423188] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1840310 was disconnected and freed. delete nvme_qpair. 
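Each one-second retry above is the same probe: list the bdevs over the per-test RPC socket and compare the result against the expected name. A minimal sketch of that polling pattern, lifted from the xtrace (rpc_cmd is the autotest wrapper around scripts/rpc.py; the whole-word match and the unbounded loop here are illustrative simplifications of what discovery_remove_ifc.sh actually does):

  # List current bdev names via the host app's RPC socket, as one space-separated line.
  get_bdev_list() {
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  # Poll once per second until the expected bdev (here nvme1n1) shows up again.
  wait_for_bdev() {
      local expected=$1
      until [[ " $(get_bdev_list) " == *" $expected "* ]]; do
          sleep 1
      done
  }

  wait_for_bdev nvme1n1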
00:26:52.299 22:24:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:52.299 22:24:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:53.241 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:53.241 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:53.241 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:53.241 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.241 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:53.241 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:53.241 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:53.241 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.241 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:53.241 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:53.241 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2913030 00:26:53.241 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2913030 ']' 00:26:53.241 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2913030 00:26:53.241 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:26:53.241 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:53.241 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2913030 00:26:53.241 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:53.241 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:53.241 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2913030' 00:26:53.241 killing process with pid 2913030 00:26:53.241 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2913030 00:26:53.241 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2913030 00:26:53.502 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:53.502 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:53.502 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:26:53.502 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:53.502 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:26:53.502 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:53.502 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:53.502 rmmod nvme_tcp 00:26:53.502 rmmod nvme_fabrics 00:26:53.502 rmmod nvme_keyring 00:26:53.502 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:53.502 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:26:53.502 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:26:53.502 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2912902 ']' 00:26:53.502 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2912902 00:26:53.502 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2912902 ']' 00:26:53.502 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2912902 00:26:53.502 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:26:53.502 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:53.502 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2912902 00:26:53.502 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:53.502 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:53.502 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2912902' 00:26:53.502 killing process with pid 2912902 00:26:53.502 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2912902 00:26:53.502 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2912902 00:26:53.763 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:53.763 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:53.763 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:53.763 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:53.763 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:53.763 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:53.763 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:53.763 22:24:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:55.673 22:24:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:55.673 00:26:55.673 real 0m24.010s 00:26:55.673 user 0m29.510s 00:26:55.673 sys 0m6.607s 00:26:55.673 22:24:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:55.673 22:24:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:55.673 ************************************ 00:26:55.673 END TEST nvmf_discovery_remove_ifc 00:26:55.673 ************************************ 00:26:55.934 22:24:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:55.934 22:24:21 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:55.934 22:24:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:55.934 22:24:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:26:55.934 22:24:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:55.934 ************************************ 00:26:55.934 START TEST nvmf_identify_kernel_target 00:26:55.934 ************************************ 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:55.934 * Looking for test storage... 00:26:55.934 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:55.934 22:24:21 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:26:55.934 22:24:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:04.130 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:04.130 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:04.130 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:04.130 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:04.130 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:04.130 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.573 ms 00:27:04.130 00:27:04.130 --- 10.0.0.2 ping statistics --- 00:27:04.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.130 rtt min/avg/max/mdev = 0.573/0.573/0.573/0.000 ms 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:04.130 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:04.130 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.428 ms 00:27:04.130 00:27:04.130 --- 10.0.0.1 ping statistics --- 00:27:04.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.130 rtt min/avg/max/mdev = 0.428/0.428/0.428/0.000 ms 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:04.130 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:04.131 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:04.131 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:27:04.131 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:04.131 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:04.131 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.131 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.131 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:04.131 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.131 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:04.131 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:04.131 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:04.131 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:04.131 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:04.131 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:04.131 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:04.131 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:04.131 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:04.131 22:24:28 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:04.131 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:27:04.131 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:04.131 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:04.131 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:04.131 22:24:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:06.676 Waiting for block devices as requested 00:27:06.676 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:06.676 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:06.676 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:06.937 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:06.937 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:06.937 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:07.198 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:07.198 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:07.198 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:07.459 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:07.459 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:07.459 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:07.720 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:07.720 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:07.720 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:07.720 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:07.981 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:08.243 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:08.243 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:08.243 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:08.243 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:08.243 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:08.243 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:08.243 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:08.243 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:08.243 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:08.243 No valid GPT data, bailing 00:27:08.243 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:08.243 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:27:08.243 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:27:08.243 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:08.243 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:08.243 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:08.243 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:08.243 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:08.243 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:08.243 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:27:08.243 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:08.243 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:27:08.243 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:08.243 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:27:08.243 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:27:08.243 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:27:08.243 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:08.243 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:08.243 00:27:08.243 Discovery Log Number of Records 2, Generation counter 2 00:27:08.243 =====Discovery Log Entry 0====== 00:27:08.243 trtype: tcp 00:27:08.243 adrfam: ipv4 00:27:08.243 subtype: current discovery subsystem 00:27:08.243 treq: not specified, sq flow control disable supported 00:27:08.243 portid: 1 00:27:08.243 trsvcid: 4420 00:27:08.243 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:08.243 traddr: 10.0.0.1 00:27:08.243 eflags: none 00:27:08.243 sectype: none 00:27:08.243 =====Discovery Log Entry 1====== 00:27:08.243 trtype: tcp 00:27:08.243 adrfam: ipv4 00:27:08.243 subtype: nvme subsystem 00:27:08.243 treq: not specified, sq flow control disable supported 00:27:08.243 portid: 1 00:27:08.243 trsvcid: 4420 00:27:08.243 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:08.243 traddr: 10.0.0.1 00:27:08.243 eflags: none 00:27:08.243 sectype: none 00:27:08.243 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:08.243 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:08.504 EAL: No free 2048 kB hugepages reported on node 1 00:27:08.504 ===================================================== 00:27:08.504 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:08.504 ===================================================== 00:27:08.504 Controller Capabilities/Features 00:27:08.504 ================================ 00:27:08.504 Vendor ID: 0000 00:27:08.504 Subsystem Vendor ID: 0000 00:27:08.504 Serial Number: a9210f1bbda6bdfefd1f 00:27:08.504 Model Number: Linux 00:27:08.504 Firmware Version: 6.7.0-68 00:27:08.504 Recommended Arb Burst: 0 00:27:08.504 IEEE OUI Identifier: 00 00 00 00:27:08.504 Multi-path I/O 00:27:08.504 May have multiple subsystem ports: No 00:27:08.504 May have multiple 
controllers: No 00:27:08.504 Associated with SR-IOV VF: No 00:27:08.504 Max Data Transfer Size: Unlimited 00:27:08.504 Max Number of Namespaces: 0 00:27:08.504 Max Number of I/O Queues: 1024 00:27:08.504 NVMe Specification Version (VS): 1.3 00:27:08.504 NVMe Specification Version (Identify): 1.3 00:27:08.504 Maximum Queue Entries: 1024 00:27:08.504 Contiguous Queues Required: No 00:27:08.504 Arbitration Mechanisms Supported 00:27:08.504 Weighted Round Robin: Not Supported 00:27:08.504 Vendor Specific: Not Supported 00:27:08.504 Reset Timeout: 7500 ms 00:27:08.504 Doorbell Stride: 4 bytes 00:27:08.504 NVM Subsystem Reset: Not Supported 00:27:08.504 Command Sets Supported 00:27:08.504 NVM Command Set: Supported 00:27:08.504 Boot Partition: Not Supported 00:27:08.504 Memory Page Size Minimum: 4096 bytes 00:27:08.504 Memory Page Size Maximum: 4096 bytes 00:27:08.504 Persistent Memory Region: Not Supported 00:27:08.504 Optional Asynchronous Events Supported 00:27:08.504 Namespace Attribute Notices: Not Supported 00:27:08.504 Firmware Activation Notices: Not Supported 00:27:08.504 ANA Change Notices: Not Supported 00:27:08.504 PLE Aggregate Log Change Notices: Not Supported 00:27:08.504 LBA Status Info Alert Notices: Not Supported 00:27:08.504 EGE Aggregate Log Change Notices: Not Supported 00:27:08.504 Normal NVM Subsystem Shutdown event: Not Supported 00:27:08.504 Zone Descriptor Change Notices: Not Supported 00:27:08.504 Discovery Log Change Notices: Supported 00:27:08.504 Controller Attributes 00:27:08.504 128-bit Host Identifier: Not Supported 00:27:08.504 Non-Operational Permissive Mode: Not Supported 00:27:08.504 NVM Sets: Not Supported 00:27:08.505 Read Recovery Levels: Not Supported 00:27:08.505 Endurance Groups: Not Supported 00:27:08.505 Predictable Latency Mode: Not Supported 00:27:08.505 Traffic Based Keep ALive: Not Supported 00:27:08.505 Namespace Granularity: Not Supported 00:27:08.505 SQ Associations: Not Supported 00:27:08.505 UUID List: Not Supported 00:27:08.505 Multi-Domain Subsystem: Not Supported 00:27:08.505 Fixed Capacity Management: Not Supported 00:27:08.505 Variable Capacity Management: Not Supported 00:27:08.505 Delete Endurance Group: Not Supported 00:27:08.505 Delete NVM Set: Not Supported 00:27:08.505 Extended LBA Formats Supported: Not Supported 00:27:08.505 Flexible Data Placement Supported: Not Supported 00:27:08.505 00:27:08.505 Controller Memory Buffer Support 00:27:08.505 ================================ 00:27:08.505 Supported: No 00:27:08.505 00:27:08.505 Persistent Memory Region Support 00:27:08.505 ================================ 00:27:08.505 Supported: No 00:27:08.505 00:27:08.505 Admin Command Set Attributes 00:27:08.505 ============================ 00:27:08.505 Security Send/Receive: Not Supported 00:27:08.505 Format NVM: Not Supported 00:27:08.505 Firmware Activate/Download: Not Supported 00:27:08.505 Namespace Management: Not Supported 00:27:08.505 Device Self-Test: Not Supported 00:27:08.505 Directives: Not Supported 00:27:08.505 NVMe-MI: Not Supported 00:27:08.505 Virtualization Management: Not Supported 00:27:08.505 Doorbell Buffer Config: Not Supported 00:27:08.505 Get LBA Status Capability: Not Supported 00:27:08.505 Command & Feature Lockdown Capability: Not Supported 00:27:08.505 Abort Command Limit: 1 00:27:08.505 Async Event Request Limit: 1 00:27:08.505 Number of Firmware Slots: N/A 00:27:08.505 Firmware Slot 1 Read-Only: N/A 00:27:08.505 Firmware Activation Without Reset: N/A 00:27:08.505 Multiple Update Detection Support: N/A 
00:27:08.505 Firmware Update Granularity: No Information Provided 00:27:08.505 Per-Namespace SMART Log: No 00:27:08.505 Asymmetric Namespace Access Log Page: Not Supported 00:27:08.505 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:08.505 Command Effects Log Page: Not Supported 00:27:08.505 Get Log Page Extended Data: Supported 00:27:08.505 Telemetry Log Pages: Not Supported 00:27:08.505 Persistent Event Log Pages: Not Supported 00:27:08.505 Supported Log Pages Log Page: May Support 00:27:08.505 Commands Supported & Effects Log Page: Not Supported 00:27:08.505 Feature Identifiers & Effects Log Page:May Support 00:27:08.505 NVMe-MI Commands & Effects Log Page: May Support 00:27:08.505 Data Area 4 for Telemetry Log: Not Supported 00:27:08.505 Error Log Page Entries Supported: 1 00:27:08.505 Keep Alive: Not Supported 00:27:08.505 00:27:08.505 NVM Command Set Attributes 00:27:08.505 ========================== 00:27:08.505 Submission Queue Entry Size 00:27:08.505 Max: 1 00:27:08.505 Min: 1 00:27:08.505 Completion Queue Entry Size 00:27:08.505 Max: 1 00:27:08.505 Min: 1 00:27:08.505 Number of Namespaces: 0 00:27:08.505 Compare Command: Not Supported 00:27:08.505 Write Uncorrectable Command: Not Supported 00:27:08.505 Dataset Management Command: Not Supported 00:27:08.505 Write Zeroes Command: Not Supported 00:27:08.505 Set Features Save Field: Not Supported 00:27:08.505 Reservations: Not Supported 00:27:08.505 Timestamp: Not Supported 00:27:08.505 Copy: Not Supported 00:27:08.505 Volatile Write Cache: Not Present 00:27:08.505 Atomic Write Unit (Normal): 1 00:27:08.505 Atomic Write Unit (PFail): 1 00:27:08.505 Atomic Compare & Write Unit: 1 00:27:08.505 Fused Compare & Write: Not Supported 00:27:08.505 Scatter-Gather List 00:27:08.505 SGL Command Set: Supported 00:27:08.505 SGL Keyed: Not Supported 00:27:08.505 SGL Bit Bucket Descriptor: Not Supported 00:27:08.505 SGL Metadata Pointer: Not Supported 00:27:08.505 Oversized SGL: Not Supported 00:27:08.505 SGL Metadata Address: Not Supported 00:27:08.505 SGL Offset: Supported 00:27:08.505 Transport SGL Data Block: Not Supported 00:27:08.505 Replay Protected Memory Block: Not Supported 00:27:08.505 00:27:08.505 Firmware Slot Information 00:27:08.505 ========================= 00:27:08.505 Active slot: 0 00:27:08.505 00:27:08.505 00:27:08.505 Error Log 00:27:08.505 ========= 00:27:08.505 00:27:08.505 Active Namespaces 00:27:08.505 ================= 00:27:08.505 Discovery Log Page 00:27:08.505 ================== 00:27:08.505 Generation Counter: 2 00:27:08.505 Number of Records: 2 00:27:08.505 Record Format: 0 00:27:08.505 00:27:08.505 Discovery Log Entry 0 00:27:08.505 ---------------------- 00:27:08.505 Transport Type: 3 (TCP) 00:27:08.505 Address Family: 1 (IPv4) 00:27:08.505 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:08.505 Entry Flags: 00:27:08.505 Duplicate Returned Information: 0 00:27:08.505 Explicit Persistent Connection Support for Discovery: 0 00:27:08.505 Transport Requirements: 00:27:08.505 Secure Channel: Not Specified 00:27:08.505 Port ID: 1 (0x0001) 00:27:08.505 Controller ID: 65535 (0xffff) 00:27:08.505 Admin Max SQ Size: 32 00:27:08.505 Transport Service Identifier: 4420 00:27:08.505 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:08.505 Transport Address: 10.0.0.1 00:27:08.505 Discovery Log Entry 1 00:27:08.505 ---------------------- 00:27:08.505 Transport Type: 3 (TCP) 00:27:08.505 Address Family: 1 (IPv4) 00:27:08.505 Subsystem Type: 2 (NVM Subsystem) 00:27:08.505 Entry Flags: 
00:27:08.505 Duplicate Returned Information: 0 00:27:08.505 Explicit Persistent Connection Support for Discovery: 0 00:27:08.505 Transport Requirements: 00:27:08.505 Secure Channel: Not Specified 00:27:08.505 Port ID: 1 (0x0001) 00:27:08.505 Controller ID: 65535 (0xffff) 00:27:08.505 Admin Max SQ Size: 32 00:27:08.505 Transport Service Identifier: 4420 00:27:08.505 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:08.505 Transport Address: 10.0.0.1 00:27:08.505 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:08.505 EAL: No free 2048 kB hugepages reported on node 1 00:27:08.505 get_feature(0x01) failed 00:27:08.505 get_feature(0x02) failed 00:27:08.505 get_feature(0x04) failed 00:27:08.505 ===================================================== 00:27:08.505 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:08.505 ===================================================== 00:27:08.505 Controller Capabilities/Features 00:27:08.505 ================================ 00:27:08.505 Vendor ID: 0000 00:27:08.505 Subsystem Vendor ID: 0000 00:27:08.505 Serial Number: f293239e156bf417ec0c 00:27:08.505 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:08.505 Firmware Version: 6.7.0-68 00:27:08.505 Recommended Arb Burst: 6 00:27:08.505 IEEE OUI Identifier: 00 00 00 00:27:08.505 Multi-path I/O 00:27:08.505 May have multiple subsystem ports: Yes 00:27:08.505 May have multiple controllers: Yes 00:27:08.505 Associated with SR-IOV VF: No 00:27:08.505 Max Data Transfer Size: Unlimited 00:27:08.505 Max Number of Namespaces: 1024 00:27:08.505 Max Number of I/O Queues: 128 00:27:08.505 NVMe Specification Version (VS): 1.3 00:27:08.505 NVMe Specification Version (Identify): 1.3 00:27:08.505 Maximum Queue Entries: 1024 00:27:08.505 Contiguous Queues Required: No 00:27:08.505 Arbitration Mechanisms Supported 00:27:08.505 Weighted Round Robin: Not Supported 00:27:08.505 Vendor Specific: Not Supported 00:27:08.505 Reset Timeout: 7500 ms 00:27:08.505 Doorbell Stride: 4 bytes 00:27:08.505 NVM Subsystem Reset: Not Supported 00:27:08.505 Command Sets Supported 00:27:08.505 NVM Command Set: Supported 00:27:08.505 Boot Partition: Not Supported 00:27:08.505 Memory Page Size Minimum: 4096 bytes 00:27:08.505 Memory Page Size Maximum: 4096 bytes 00:27:08.505 Persistent Memory Region: Not Supported 00:27:08.505 Optional Asynchronous Events Supported 00:27:08.505 Namespace Attribute Notices: Supported 00:27:08.505 Firmware Activation Notices: Not Supported 00:27:08.505 ANA Change Notices: Supported 00:27:08.505 PLE Aggregate Log Change Notices: Not Supported 00:27:08.505 LBA Status Info Alert Notices: Not Supported 00:27:08.505 EGE Aggregate Log Change Notices: Not Supported 00:27:08.505 Normal NVM Subsystem Shutdown event: Not Supported 00:27:08.505 Zone Descriptor Change Notices: Not Supported 00:27:08.505 Discovery Log Change Notices: Not Supported 00:27:08.505 Controller Attributes 00:27:08.505 128-bit Host Identifier: Supported 00:27:08.505 Non-Operational Permissive Mode: Not Supported 00:27:08.505 NVM Sets: Not Supported 00:27:08.505 Read Recovery Levels: Not Supported 00:27:08.505 Endurance Groups: Not Supported 00:27:08.505 Predictable Latency Mode: Not Supported 00:27:08.505 Traffic Based Keep ALive: Supported 00:27:08.505 Namespace Granularity: Not Supported 
00:27:08.505 SQ Associations: Not Supported 00:27:08.505 UUID List: Not Supported 00:27:08.505 Multi-Domain Subsystem: Not Supported 00:27:08.505 Fixed Capacity Management: Not Supported 00:27:08.506 Variable Capacity Management: Not Supported 00:27:08.506 Delete Endurance Group: Not Supported 00:27:08.506 Delete NVM Set: Not Supported 00:27:08.506 Extended LBA Formats Supported: Not Supported 00:27:08.506 Flexible Data Placement Supported: Not Supported 00:27:08.506 00:27:08.506 Controller Memory Buffer Support 00:27:08.506 ================================ 00:27:08.506 Supported: No 00:27:08.506 00:27:08.506 Persistent Memory Region Support 00:27:08.506 ================================ 00:27:08.506 Supported: No 00:27:08.506 00:27:08.506 Admin Command Set Attributes 00:27:08.506 ============================ 00:27:08.506 Security Send/Receive: Not Supported 00:27:08.506 Format NVM: Not Supported 00:27:08.506 Firmware Activate/Download: Not Supported 00:27:08.506 Namespace Management: Not Supported 00:27:08.506 Device Self-Test: Not Supported 00:27:08.506 Directives: Not Supported 00:27:08.506 NVMe-MI: Not Supported 00:27:08.506 Virtualization Management: Not Supported 00:27:08.506 Doorbell Buffer Config: Not Supported 00:27:08.506 Get LBA Status Capability: Not Supported 00:27:08.506 Command & Feature Lockdown Capability: Not Supported 00:27:08.506 Abort Command Limit: 4 00:27:08.506 Async Event Request Limit: 4 00:27:08.506 Number of Firmware Slots: N/A 00:27:08.506 Firmware Slot 1 Read-Only: N/A 00:27:08.506 Firmware Activation Without Reset: N/A 00:27:08.506 Multiple Update Detection Support: N/A 00:27:08.506 Firmware Update Granularity: No Information Provided 00:27:08.506 Per-Namespace SMART Log: Yes 00:27:08.506 Asymmetric Namespace Access Log Page: Supported 00:27:08.506 ANA Transition Time : 10 sec 00:27:08.506 00:27:08.506 Asymmetric Namespace Access Capabilities 00:27:08.506 ANA Optimized State : Supported 00:27:08.506 ANA Non-Optimized State : Supported 00:27:08.506 ANA Inaccessible State : Supported 00:27:08.506 ANA Persistent Loss State : Supported 00:27:08.506 ANA Change State : Supported 00:27:08.506 ANAGRPID is not changed : No 00:27:08.506 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:08.506 00:27:08.506 ANA Group Identifier Maximum : 128 00:27:08.506 Number of ANA Group Identifiers : 128 00:27:08.506 Max Number of Allowed Namespaces : 1024 00:27:08.506 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:08.506 Command Effects Log Page: Supported 00:27:08.506 Get Log Page Extended Data: Supported 00:27:08.506 Telemetry Log Pages: Not Supported 00:27:08.506 Persistent Event Log Pages: Not Supported 00:27:08.506 Supported Log Pages Log Page: May Support 00:27:08.506 Commands Supported & Effects Log Page: Not Supported 00:27:08.506 Feature Identifiers & Effects Log Page:May Support 00:27:08.506 NVMe-MI Commands & Effects Log Page: May Support 00:27:08.506 Data Area 4 for Telemetry Log: Not Supported 00:27:08.506 Error Log Page Entries Supported: 128 00:27:08.506 Keep Alive: Supported 00:27:08.506 Keep Alive Granularity: 1000 ms 00:27:08.506 00:27:08.506 NVM Command Set Attributes 00:27:08.506 ========================== 00:27:08.506 Submission Queue Entry Size 00:27:08.506 Max: 64 00:27:08.506 Min: 64 00:27:08.506 Completion Queue Entry Size 00:27:08.506 Max: 16 00:27:08.506 Min: 16 00:27:08.506 Number of Namespaces: 1024 00:27:08.506 Compare Command: Not Supported 00:27:08.506 Write Uncorrectable Command: Not Supported 00:27:08.506 Dataset Management Command: Supported 
00:27:08.506 Write Zeroes Command: Supported 00:27:08.506 Set Features Save Field: Not Supported 00:27:08.506 Reservations: Not Supported 00:27:08.506 Timestamp: Not Supported 00:27:08.506 Copy: Not Supported 00:27:08.506 Volatile Write Cache: Present 00:27:08.506 Atomic Write Unit (Normal): 1 00:27:08.506 Atomic Write Unit (PFail): 1 00:27:08.506 Atomic Compare & Write Unit: 1 00:27:08.506 Fused Compare & Write: Not Supported 00:27:08.506 Scatter-Gather List 00:27:08.506 SGL Command Set: Supported 00:27:08.506 SGL Keyed: Not Supported 00:27:08.506 SGL Bit Bucket Descriptor: Not Supported 00:27:08.506 SGL Metadata Pointer: Not Supported 00:27:08.506 Oversized SGL: Not Supported 00:27:08.506 SGL Metadata Address: Not Supported 00:27:08.506 SGL Offset: Supported 00:27:08.506 Transport SGL Data Block: Not Supported 00:27:08.506 Replay Protected Memory Block: Not Supported 00:27:08.506 00:27:08.506 Firmware Slot Information 00:27:08.506 ========================= 00:27:08.506 Active slot: 0 00:27:08.506 00:27:08.506 Asymmetric Namespace Access 00:27:08.506 =========================== 00:27:08.506 Change Count : 0 00:27:08.506 Number of ANA Group Descriptors : 1 00:27:08.506 ANA Group Descriptor : 0 00:27:08.506 ANA Group ID : 1 00:27:08.506 Number of NSID Values : 1 00:27:08.506 Change Count : 0 00:27:08.506 ANA State : 1 00:27:08.506 Namespace Identifier : 1 00:27:08.506 00:27:08.506 Commands Supported and Effects 00:27:08.506 ============================== 00:27:08.506 Admin Commands 00:27:08.506 -------------- 00:27:08.506 Get Log Page (02h): Supported 00:27:08.506 Identify (06h): Supported 00:27:08.506 Abort (08h): Supported 00:27:08.506 Set Features (09h): Supported 00:27:08.506 Get Features (0Ah): Supported 00:27:08.506 Asynchronous Event Request (0Ch): Supported 00:27:08.506 Keep Alive (18h): Supported 00:27:08.506 I/O Commands 00:27:08.506 ------------ 00:27:08.506 Flush (00h): Supported 00:27:08.506 Write (01h): Supported LBA-Change 00:27:08.506 Read (02h): Supported 00:27:08.506 Write Zeroes (08h): Supported LBA-Change 00:27:08.506 Dataset Management (09h): Supported 00:27:08.506 00:27:08.506 Error Log 00:27:08.506 ========= 00:27:08.506 Entry: 0 00:27:08.506 Error Count: 0x3 00:27:08.506 Submission Queue Id: 0x0 00:27:08.506 Command Id: 0x5 00:27:08.506 Phase Bit: 0 00:27:08.506 Status Code: 0x2 00:27:08.506 Status Code Type: 0x0 00:27:08.506 Do Not Retry: 1 00:27:08.506 Error Location: 0x28 00:27:08.506 LBA: 0x0 00:27:08.506 Namespace: 0x0 00:27:08.506 Vendor Log Page: 0x0 00:27:08.506 ----------- 00:27:08.506 Entry: 1 00:27:08.506 Error Count: 0x2 00:27:08.506 Submission Queue Id: 0x0 00:27:08.506 Command Id: 0x5 00:27:08.506 Phase Bit: 0 00:27:08.506 Status Code: 0x2 00:27:08.506 Status Code Type: 0x0 00:27:08.506 Do Not Retry: 1 00:27:08.506 Error Location: 0x28 00:27:08.506 LBA: 0x0 00:27:08.506 Namespace: 0x0 00:27:08.506 Vendor Log Page: 0x0 00:27:08.506 ----------- 00:27:08.506 Entry: 2 00:27:08.506 Error Count: 0x1 00:27:08.506 Submission Queue Id: 0x0 00:27:08.506 Command Id: 0x4 00:27:08.506 Phase Bit: 0 00:27:08.506 Status Code: 0x2 00:27:08.506 Status Code Type: 0x0 00:27:08.506 Do Not Retry: 1 00:27:08.506 Error Location: 0x28 00:27:08.506 LBA: 0x0 00:27:08.506 Namespace: 0x0 00:27:08.506 Vendor Log Page: 0x0 00:27:08.506 00:27:08.506 Number of Queues 00:27:08.506 ================ 00:27:08.506 Number of I/O Submission Queues: 128 00:27:08.506 Number of I/O Completion Queues: 128 00:27:08.506 00:27:08.506 ZNS Specific Controller Data 00:27:08.506 
============================ 00:27:08.506 Zone Append Size Limit: 0 00:27:08.506 00:27:08.506 00:27:08.506 Active Namespaces 00:27:08.506 ================= 00:27:08.506 get_feature(0x05) failed 00:27:08.506 Namespace ID:1 00:27:08.506 Command Set Identifier: NVM (00h) 00:27:08.506 Deallocate: Supported 00:27:08.506 Deallocated/Unwritten Error: Not Supported 00:27:08.506 Deallocated Read Value: Unknown 00:27:08.506 Deallocate in Write Zeroes: Not Supported 00:27:08.506 Deallocated Guard Field: 0xFFFF 00:27:08.506 Flush: Supported 00:27:08.506 Reservation: Not Supported 00:27:08.506 Namespace Sharing Capabilities: Multiple Controllers 00:27:08.506 Size (in LBAs): 3750748848 (1788GiB) 00:27:08.506 Capacity (in LBAs): 3750748848 (1788GiB) 00:27:08.506 Utilization (in LBAs): 3750748848 (1788GiB) 00:27:08.506 UUID: 395c87b0-5f76-465f-b287-d3ea77e51a5c 00:27:08.506 Thin Provisioning: Not Supported 00:27:08.506 Per-NS Atomic Units: Yes 00:27:08.506 Atomic Write Unit (Normal): 8 00:27:08.506 Atomic Write Unit (PFail): 8 00:27:08.506 Preferred Write Granularity: 8 00:27:08.506 Atomic Compare & Write Unit: 8 00:27:08.506 Atomic Boundary Size (Normal): 0 00:27:08.506 Atomic Boundary Size (PFail): 0 00:27:08.506 Atomic Boundary Offset: 0 00:27:08.506 NGUID/EUI64 Never Reused: No 00:27:08.506 ANA group ID: 1 00:27:08.506 Namespace Write Protected: No 00:27:08.506 Number of LBA Formats: 1 00:27:08.506 Current LBA Format: LBA Format #00 00:27:08.506 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:08.506 00:27:08.506 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:08.506 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:08.506 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:27:08.507 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:08.507 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:27:08.507 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:08.507 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:08.507 rmmod nvme_tcp 00:27:08.507 rmmod nvme_fabrics 00:27:08.507 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:08.507 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:27:08.507 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:27:08.507 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:08.507 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:08.507 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:08.507 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:08.507 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:08.507 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:08.507 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:08.507 22:24:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:08.507 22:24:33 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:11.053 22:24:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:11.053 22:24:35 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:11.053 22:24:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:11.053 22:24:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:27:11.053 22:24:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:11.053 22:24:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:11.053 22:24:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:11.053 22:24:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:11.053 22:24:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:11.053 22:24:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:11.053 22:24:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:13.602 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:13.602 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:13.602 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:13.602 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:13.602 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:13.602 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:13.602 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:13.602 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:13.602 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:13.602 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:13.602 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:13.602 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:13.602 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:13.602 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:13.863 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:13.863 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:13.863 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:14.124 00:27:14.124 real 0m18.253s 00:27:14.124 user 0m4.731s 00:27:14.124 sys 0m10.452s 00:27:14.124 22:24:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:14.124 22:24:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:14.124 ************************************ 00:27:14.124 END TEST nvmf_identify_kernel_target 00:27:14.124 ************************************ 00:27:14.124 22:24:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:14.124 22:24:39 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:14.124 22:24:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:14.124 22:24:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:14.124 22:24:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:14.124 ************************************ 
00:27:14.124 START TEST nvmf_auth_host 00:27:14.124 ************************************ 00:27:14.124 22:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:14.385 * Looking for test storage... 00:27:14.385 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:14.385 22:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:14.385 22:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:27:14.386 22:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.975 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:20.975 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:27:20.975 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:20.975 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:20.975 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:20.975 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:20.975 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:20.975 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:27:20.975 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:20.975 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:27:20.975 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:27:20.975 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:27:20.975 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:27:20.975 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:27:20.975 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:27:20.975 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:20.975 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:20.975 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:20.975 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:20.975 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:20.975 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:20.975 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:20.975 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:20.975 
22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:20.975 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:20.975 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:20.975 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:20.975 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:20.976 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:20.976 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:20.976 Found net devices under 0000:4b:00.0: 
cvl_0_0 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:20.976 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:20.976 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:21.237 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:21.237 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:21.237 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:21.237 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:21.237 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:21.237 22:24:46 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:21.237 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:21.237 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:21.237 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.555 ms 00:27:21.237 00:27:21.237 --- 10.0.0.2 ping statistics --- 00:27:21.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.237 rtt min/avg/max/mdev = 0.555/0.555/0.555/0.000 ms 00:27:21.237 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:21.237 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:21.237 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.356 ms 00:27:21.237 00:27:21.237 --- 10.0.0.1 ping statistics --- 00:27:21.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.237 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:27:21.237 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:21.237 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:27:21.237 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:21.237 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:21.237 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:21.237 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:21.237 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:21.237 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:21.237 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:21.237 22:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:21.237 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:21.237 22:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:21.237 22:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.237 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2927729 00:27:21.237 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2927729 00:27:21.237 22:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:21.237 22:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2927729 ']' 00:27:21.237 22:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:21.237 22:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:21.237 22:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
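
The nvmf_tcp_init sequence traced above splits the two ice ports between the default namespace and a dedicated target namespace, then checks reachability before the target is started. Condensed from the commands visible in this run (the interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are specific to this machine):

    # move the target-side port into its own network namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # initiator side keeps 10.0.0.1, the target namespace gets 10.0.0.2
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # open NVMe/TCP port 4420 on the initiator-side interface and verify both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

nvmf_tgt is then launched with "ip netns exec cvl_0_0_ns_spdk" prepended (NVMF_TARGET_NS_CMD), so the SPDK target listens from inside the namespace while the kernel initiator connects from the default one.
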
00:27:21.237 22:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:21.237 22:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.179 22:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:22.179 22:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:27:22.179 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:22.179 22:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:22.179 22:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.179 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:22.179 22:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:22.179 22:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:22.179 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:22.179 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:22.179 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:22.179 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:22.179 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:22.179 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:22.179 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=71e5fd0e60a7c183ac039c27c4dca402 00:27:22.179 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:22.179 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.AUx 00:27:22.179 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 71e5fd0e60a7c183ac039c27c4dca402 0 00:27:22.179 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 71e5fd0e60a7c183ac039c27c4dca402 0 00:27:22.179 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:22.179 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:22.179 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=71e5fd0e60a7c183ac039c27c4dca402 00:27:22.179 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:22.179 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:22.179 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.AUx 00:27:22.180 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.AUx 00:27:22.180 22:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.AUx 00:27:22.180 22:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:22.180 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:22.180 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:22.180 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:22.180 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:22.180 
22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:22.180 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:22.180 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e58e37c64764ebe5dd2b3f9e6e4d047ead7313fc540ba1409b5933009d70ab5f 00:27:22.180 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:22.180 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.HGb 00:27:22.180 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e58e37c64764ebe5dd2b3f9e6e4d047ead7313fc540ba1409b5933009d70ab5f 3 00:27:22.180 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e58e37c64764ebe5dd2b3f9e6e4d047ead7313fc540ba1409b5933009d70ab5f 3 00:27:22.180 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:22.180 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:22.180 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e58e37c64764ebe5dd2b3f9e6e4d047ead7313fc540ba1409b5933009d70ab5f 00:27:22.180 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:22.180 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.HGb 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.HGb 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.HGb 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6433e889719200e95da58e2060342b23d73f4eeef2956fe0 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.h7Q 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6433e889719200e95da58e2060342b23d73f4eeef2956fe0 0 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6433e889719200e95da58e2060342b23d73f4eeef2956fe0 0 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6433e889719200e95da58e2060342b23d73f4eeef2956fe0 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.h7Q 00:27:22.441 22:24:47 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.h7Q 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.h7Q 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=eb11d8603d84474630f8e81942620582012c0262f0dce7df 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.TxX 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key eb11d8603d84474630f8e81942620582012c0262f0dce7df 2 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 eb11d8603d84474630f8e81942620582012c0262f0dce7df 2 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=eb11d8603d84474630f8e81942620582012c0262f0dce7df 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.TxX 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.TxX 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.TxX 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6eabb74a2243eb0c69efe4765f77f8d4 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.305 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6eabb74a2243eb0c69efe4765f77f8d4 1 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6eabb74a2243eb0c69efe4765f77f8d4 1 
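
Each of the secrets above is produced by gen_dhchap_key in nvmf/common.sh. As traced, it reads random bytes with xxd, has an inline Python snippet wrap the hex secret into a "DHHC-1:<digest id>:<base64 payload>:" string, and stores the result in a mode-0600 temp file. A simplified sketch of one iteration, using the values of the first key in this run (the redirect into the file is not visible in the xtrace output, so that part is an assumption):

    digest=null; len=32
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # here: 71e5fd0e60a7c183ac039c27c4dca402
    file=$(mktemp -t "spdk.key-$digest.XXX")          # here: /tmp/spdk.key-null.AUx
    format_dhchap_key "$key" 0 > "$file"              # inline Python emits the DHHC-1:00:... form;
                                                      # writing it to $file is assumed, not traced
    chmod 0600 "$file"
    echo "$file"                                      # path picked up by the caller in host/auth.sh
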
00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6eabb74a2243eb0c69efe4765f77f8d4 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.305 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.305 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.305 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c5d7c0832b9d9c09524dd92f210bc493 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.uiw 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c5d7c0832b9d9c09524dd92f210bc493 1 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c5d7c0832b9d9c09524dd92f210bc493 1 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c5d7c0832b9d9c09524dd92f210bc493 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:22.441 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:22.702 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.uiw 00:27:22.702 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.uiw 00:27:22.702 22:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.uiw 00:27:22.702 22:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:22.702 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:22.702 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:22.702 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:22.702 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:22.702 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:22.702 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:22.702 22:24:47 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=3d84adcb776e8c8da710ab96702cacadb98dc2f6c5fc0a87 00:27:22.702 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:22.702 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.45P 00:27:22.702 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3d84adcb776e8c8da710ab96702cacadb98dc2f6c5fc0a87 2 00:27:22.702 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3d84adcb776e8c8da710ab96702cacadb98dc2f6c5fc0a87 2 00:27:22.702 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:22.702 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:22.702 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3d84adcb776e8c8da710ab96702cacadb98dc2f6c5fc0a87 00:27:22.702 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:22.702 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:22.702 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.45P 00:27:22.702 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.45P 00:27:22.702 22:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.45P 00:27:22.702 22:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:22.702 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:22.702 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:22.703 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:22.703 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:22.703 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:22.703 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:22.703 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2e3124f7b202d5a53a77b0ece37ee277 00:27:22.703 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:22.703 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.q6n 00:27:22.703 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2e3124f7b202d5a53a77b0ece37ee277 0 00:27:22.703 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2e3124f7b202d5a53a77b0ece37ee277 0 00:27:22.703 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:22.703 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:22.703 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2e3124f7b202d5a53a77b0ece37ee277 00:27:22.703 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:22.703 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:22.703 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.q6n 00:27:22.703 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.q6n 00:27:22.703 22:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.q6n 00:27:22.703 22:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:22.703 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:27:22.703 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:22.703 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:22.703 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:22.703 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:22.703 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:22.703 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9458b6c5f1b3ae4ffbe45bdd0c7c2fc778114f8c566d86e904c7575abfbe47e3 00:27:22.703 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:22.703 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.mIT 00:27:22.703 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9458b6c5f1b3ae4ffbe45bdd0c7c2fc778114f8c566d86e904c7575abfbe47e3 3 00:27:22.703 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9458b6c5f1b3ae4ffbe45bdd0c7c2fc778114f8c566d86e904c7575abfbe47e3 3 00:27:22.703 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:22.703 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:22.703 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9458b6c5f1b3ae4ffbe45bdd0c7c2fc778114f8c566d86e904c7575abfbe47e3 00:27:22.703 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:22.703 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:22.703 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.mIT 00:27:22.703 22:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.mIT 00:27:22.703 22:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.mIT 00:27:22.703 22:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:22.703 22:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2927729 00:27:22.703 22:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2927729 ']' 00:27:22.703 22:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:22.703 22:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:22.703 22:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:22.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
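
At this point host/auth.sh has generated five key files plus controller-side (ckey) counterparts for the first four. As recorded in the trace, the pairings are:

    keys[0]=/tmp/spdk.key-null.AUx      ckeys[0]=/tmp/spdk.key-sha512.HGb
    keys[1]=/tmp/spdk.key-null.h7Q      ckeys[1]=/tmp/spdk.key-sha384.TxX
    keys[2]=/tmp/spdk.key-sha256.305    ckeys[2]=/tmp/spdk.key-sha256.uiw
    keys[3]=/tmp/spdk.key-sha384.45P    ckeys[3]=/tmp/spdk.key-null.q6n
    keys[4]=/tmp/spdk.key-sha512.mIT    ckeys[4]=
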
00:27:22.703 22:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:22.703 22:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.AUx 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.HGb ]] 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.HGb 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.h7Q 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.TxX ]] 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.TxX 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.305 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.uiw ]] 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.uiw 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
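
The interleaved keyring_file_add_key calls above and below come from a single loop in host/auth.sh (lines 80-82 in this build), which registers every generated file with the freshly started nvmf_tgt through rpc_cmd, the autotest wrapper around scripts/rpc.py. Reconstructed from the trace, the loop is approximately:

    for i in "${!keys[@]}"; do
        rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"
        [[ -n ${ckeys[i]} ]] && rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
    done

Only key4 has no controller key, so its ckey registration is skipped, which matches the empty ckeys[4] and the [[ -n '' ]] test seen below.
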
00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.45P 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.q6n ]] 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.q6n 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.mIT 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
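
configure_kernel_target, called here with the subsystem NQN and the initiator-side address 10.0.0.1, builds the kernel nvmet side entirely through configfs. The paths it has just derived, and which the following lines create and populate, are:

    nvmet=/sys/kernel/config/nvmet
    kernel_subsystem=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    kernel_namespace=$kernel_subsystem/namespaces/1
    kernel_port=$nvmet/ports/1

After setup.sh reset hands the local NVMe disk back to the kernel driver, the first non-zoned, unused namespace (/dev/nvme0n1 in this run) is attached as the backing device and the port is bound to 10.0.0.1:4420 over TCP/IPv4, then the subsystem is linked into the port, as the trace below shows.
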
00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:22.964 22:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:23.225 22:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:23.225 22:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:26.548 Waiting for block devices as requested 00:27:26.548 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:26.548 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:26.548 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:26.548 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:26.548 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:26.548 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:26.549 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:26.809 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:26.809 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:27.069 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:27.069 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:27.069 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:27.069 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:27.332 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:27.332 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:27.332 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:27.636 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:28.575 No valid GPT data, bailing 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:28.575 00:27:28.575 Discovery Log Number of Records 2, Generation counter 2 00:27:28.575 =====Discovery Log Entry 0====== 00:27:28.575 trtype: tcp 00:27:28.575 adrfam: ipv4 00:27:28.575 subtype: current discovery subsystem 00:27:28.575 treq: not specified, sq flow control disable supported 00:27:28.575 portid: 1 00:27:28.575 trsvcid: 4420 00:27:28.575 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:28.575 traddr: 10.0.0.1 00:27:28.575 eflags: none 00:27:28.575 sectype: none 00:27:28.575 =====Discovery Log Entry 1====== 00:27:28.575 trtype: tcp 00:27:28.575 adrfam: ipv4 00:27:28.575 subtype: nvme subsystem 00:27:28.575 treq: not specified, sq flow control disable supported 00:27:28.575 portid: 1 00:27:28.575 trsvcid: 4420 00:27:28.575 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:28.575 traddr: 10.0.0.1 00:27:28.575 eflags: none 00:27:28.575 sectype: none 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQzM2U4ODk3MTkyMDBlOTVkYTU4ZTIwNjAzNDJiMjNkNzNmNGVlZWYyOTU2ZmUwcUaQtA==: 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQzM2U4ODk3MTkyMDBlOTVkYTU4ZTIwNjAzNDJiMjNkNzNmNGVlZWYyOTU2ZmUwcUaQtA==: 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: 
]] 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.575 22:24:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:28.576 22:24:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.576 22:24:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:28.576 22:24:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:28.576 22:24:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:28.576 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:28.576 22:24:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.576 22:24:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.836 nvme0n1 00:27:28.836 22:24:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.836 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.836 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.836 22:24:53 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.836 22:24:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.836 22:24:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.836 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.836 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.836 22:24:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.836 22:24:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.836 22:24:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.836 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:28.836 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:28.836 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.836 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:28.836 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.836 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:28.836 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:28.836 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:28.836 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFlNWZkMGU2MGE3YzE4M2FjMDM5YzI3YzRkY2E0MDIXAFaV: 00:27:28.836 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTU4ZTM3YzY0NzY0ZWJlNWRkMmIzZjllNmU0ZDA0N2VhZDczMTNmYzU0MGJhMTQwOWI1OTMzMDA5ZDcwYWI1ZtGGAwc=: 00:27:28.836 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:28.836 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:28.836 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFlNWZkMGU2MGE3YzE4M2FjMDM5YzI3YzRkY2E0MDIXAFaV: 00:27:28.836 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTU4ZTM3YzY0NzY0ZWJlNWRkMmIzZjllNmU0ZDA0N2VhZDczMTNmYzU0MGJhMTQwOWI1OTMzMDA5ZDcwYWI1ZtGGAwc=: ]] 00:27:28.836 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTU4ZTM3YzY0NzY0ZWJlNWRkMmIzZjllNmU0ZDA0N2VhZDczMTNmYzU0MGJhMTQwOWI1OTMzMDA5ZDcwYWI1ZtGGAwc=: 00:27:28.836 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:27:28.836 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.836 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:28.836 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:28.836 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:28.836 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.836 22:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:28.836 22:24:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.836 22:24:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.836 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.836 
22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.836 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:28.836 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:28.836 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:28.836 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.836 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.836 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:28.836 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.836 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:28.836 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:28.836 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:28.836 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:28.836 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.836 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.107 nvme0n1 00:27:29.107 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.107 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.107 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.107 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.107 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.107 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQzM2U4ODk3MTkyMDBlOTVkYTU4ZTIwNjAzNDJiMjNkNzNmNGVlZWYyOTU2ZmUwcUaQtA==: 00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: 00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:29.108 22:24:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQzM2U4ODk3MTkyMDBlOTVkYTU4ZTIwNjAzNDJiMjNkNzNmNGVlZWYyOTU2ZmUwcUaQtA==: 00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: ]] 00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: 00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.108 nvme0n1 00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
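Each connect_authenticate iteration traced here follows the same RPC cycle: restrict the initiator's allowed digests and DH groups, attach a controller to the kernel target with the matching --dhchap-key/--dhchap-ctrlr-key pair, check that the controller appeared, and detach it again. A condensed sketch of one iteration (sha256 / ffdhe2048, keyid 1), assuming the keyring entries and the kernel target from the earlier steps are already in place; scripts/rpc.py and the grep check stand in for the harness's rpc_cmd wrapper and its name check on the bdev_nvme_get_controllers output:

#!/usr/bin/env bash
# Sketch: one DH-HMAC-CHAP connect/verify/detach cycle as traced above.
set -euo pipefail

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Limit the initiator to the digest and DH group under test.
"$rpc" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Attach with DH-HMAC-CHAP: key1 authenticates the host, ckey1 the controller.
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# The iteration passes if a controller named nvme0 shows up; then tear it down.
"$rpc" bdev_nvme_get_controllers | jq -r '.[].name' | grep -qx nvme0
"$rpc" bdev_nvme_detach_controller nvme0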
00:27:29.108 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.368 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.368 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.368 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.368 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.368 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.368 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.368 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:29.368 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.368 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:29.368 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:29.368 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:29.368 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVhYmI3NGEyMjQzZWIwYzY5ZWZlNDc2NWY3N2Y4ZDSZDBp+: 00:27:29.368 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzVkN2MwODMyYjlkOWMwOTUyNGRkOTJmMjEwYmM0OTMqm/lv: 00:27:29.368 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:29.368 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:29.368 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmVhYmI3NGEyMjQzZWIwYzY5ZWZlNDc2NWY3N2Y4ZDSZDBp+: 00:27:29.368 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzVkN2MwODMyYjlkOWMwOTUyNGRkOTJmMjEwYmM0OTMqm/lv: ]] 00:27:29.368 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzVkN2MwODMyYjlkOWMwOTUyNGRkOTJmMjEwYmM0OTMqm/lv: 00:27:29.368 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:29.368 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.368 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:29.368 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:29.368 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:29.368 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.368 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:29.368 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.368 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.368 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.368 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.368 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.368 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.368 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.368 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.368 22:24:54 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.368 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.368 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.368 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.368 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.368 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.368 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:29.368 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.368 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.368 nvme0n1 00:27:29.368 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.369 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.369 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.369 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.369 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.369 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.369 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.369 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.369 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.369 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q4NGFkY2I3NzZlOGM4ZGE3MTBhYjk2NzAyY2FjYWRiOThkYzJmNmM1ZmMwYTg3ZwiJXg==: 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmUzMTI0ZjdiMjAyZDVhNTNhNzdiMGVjZTM3ZWUyNzdv/eoY: 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q4NGFkY2I3NzZlOGM4ZGE3MTBhYjk2NzAyY2FjYWRiOThkYzJmNmM1ZmMwYTg3ZwiJXg==: 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmUzMTI0ZjdiMjAyZDVhNTNhNzdiMGVjZTM3ZWUyNzdv/eoY: ]] 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmUzMTI0ZjdiMjAyZDVhNTNhNzdiMGVjZTM3ZWUyNzdv/eoY: 00:27:29.629 22:24:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.629 nvme0n1 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:29.629 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:29.630 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:29.630 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTQ1OGI2YzVmMWIzYWU0ZmZiZTQ1YmRkMGM3YzJmYzc3ODExNGY4YzU2NmQ4NmU5MDRjNzU3NWFiZmJlNDdlM5rmE/U=: 00:27:29.630 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:29.630 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:29.630 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:29.630 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTQ1OGI2YzVmMWIzYWU0ZmZiZTQ1YmRkMGM3YzJmYzc3ODExNGY4YzU2NmQ4NmU5MDRjNzU3NWFiZmJlNDdlM5rmE/U=: 00:27:29.630 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:29.630 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:29.630 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.630 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:29.630 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:29.630 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:29.630 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.630 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:29.630 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.630 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.630 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.630 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.630 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.630 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.630 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.630 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.630 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.630 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.630 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.630 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.630 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.630 22:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.630 22:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:29.630 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.630 22:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.890 nvme0n1 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFlNWZkMGU2MGE3YzE4M2FjMDM5YzI3YzRkY2E0MDIXAFaV: 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTU4ZTM3YzY0NzY0ZWJlNWRkMmIzZjllNmU0ZDA0N2VhZDczMTNmYzU0MGJhMTQwOWI1OTMzMDA5ZDcwYWI1ZtGGAwc=: 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFlNWZkMGU2MGE3YzE4M2FjMDM5YzI3YzRkY2E0MDIXAFaV: 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTU4ZTM3YzY0NzY0ZWJlNWRkMmIzZjllNmU0ZDA0N2VhZDczMTNmYzU0MGJhMTQwOWI1OTMzMDA5ZDcwYWI1ZtGGAwc=: ]] 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTU4ZTM3YzY0NzY0ZWJlNWRkMmIzZjllNmU0ZDA0N2VhZDczMTNmYzU0MGJhMTQwOWI1OTMzMDA5ZDcwYWI1ZtGGAwc=: 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.890 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.151 nvme0n1 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQzM2U4ODk3MTkyMDBlOTVkYTU4ZTIwNjAzNDJiMjNkNzNmNGVlZWYyOTU2ZmUwcUaQtA==: 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQzM2U4ODk3MTkyMDBlOTVkYTU4ZTIwNjAzNDJiMjNkNzNmNGVlZWYyOTU2ZmUwcUaQtA==: 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: ]] 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.151 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.411 nvme0n1 00:27:30.411 
22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.411 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.411 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.411 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.411 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.411 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.411 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.411 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.411 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.411 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.411 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.411 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.411 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:30.411 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.411 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:30.411 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:30.411 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:30.411 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVhYmI3NGEyMjQzZWIwYzY5ZWZlNDc2NWY3N2Y4ZDSZDBp+: 00:27:30.411 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzVkN2MwODMyYjlkOWMwOTUyNGRkOTJmMjEwYmM0OTMqm/lv: 00:27:30.411 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:30.411 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:30.411 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmVhYmI3NGEyMjQzZWIwYzY5ZWZlNDc2NWY3N2Y4ZDSZDBp+: 00:27:30.411 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzVkN2MwODMyYjlkOWMwOTUyNGRkOTJmMjEwYmM0OTMqm/lv: ]] 00:27:30.411 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzVkN2MwODMyYjlkOWMwOTUyNGRkOTJmMjEwYmM0OTMqm/lv: 00:27:30.411 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:30.411 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.411 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:30.411 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:30.411 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:30.411 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.412 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:30.412 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.412 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.412 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.412 22:24:55 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:30.412 22:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.412 22:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.412 22:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.412 22:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.412 22:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.412 22:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.412 22:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.412 22:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.412 22:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.412 22:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.412 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:30.412 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.412 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.672 nvme0n1 00:27:30.672 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.672 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.672 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.672 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.672 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.672 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.672 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.672 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.672 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.672 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.672 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.672 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.672 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:30.672 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.672 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:30.672 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:30.672 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:30.672 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q4NGFkY2I3NzZlOGM4ZGE3MTBhYjk2NzAyY2FjYWRiOThkYzJmNmM1ZmMwYTg3ZwiJXg==: 00:27:30.672 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmUzMTI0ZjdiMjAyZDVhNTNhNzdiMGVjZTM3ZWUyNzdv/eoY: 00:27:30.672 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:30.672 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
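On the target side, each nvmet_auth_set_key call in the trace pushes the digest, DH group, and key material for the allowed host into the kernel nvmet configfs tree before the initiator reconnects. The trace only shows the values being echoed, not their destination files, so the attribute names in the sketch below (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key under the host's configfs directory) are an assumption based on the kernel's nvmet DH-HMAC-CHAP interface rather than something read directly from this log:

#!/usr/bin/env bash
# Sketch: target-side counterpart of "nvmet_auth_set_key sha256 ffdhe3072 3".
# Attribute names are assumed from the kernel nvmet configfs layout.
set -euo pipefail

host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

# Secrets for keyid 3, copied from the trace (key3 / ckey3).
key='DHHC-1:02:M2Q4NGFkY2I3NzZlOGM4ZGE3MTBhYjk2NzAyY2FjYWRiOThkYzJmNmM1ZmMwYTg3ZwiJXg==:'
ckey='DHHC-1:00:MmUzMTI0ZjdiMjAyZDVhNTNhNzdiMGVjZTM3ZWUyNzdv/eoY:'

echo 'hmac(sha256)' > "$host_dir/dhchap_hash"     # digest under test
echo 'ffdhe3072'    > "$host_dir/dhchap_dhgroup"  # DH group under test
echo "$key"         > "$host_dir/dhchap_key"      # host secret
echo "$ckey"        > "$host_dir/dhchap_ctrl_key" # controller (bidirectional) secret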
00:27:30.672 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q4NGFkY2I3NzZlOGM4ZGE3MTBhYjk2NzAyY2FjYWRiOThkYzJmNmM1ZmMwYTg3ZwiJXg==: 00:27:30.673 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmUzMTI0ZjdiMjAyZDVhNTNhNzdiMGVjZTM3ZWUyNzdv/eoY: ]] 00:27:30.673 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmUzMTI0ZjdiMjAyZDVhNTNhNzdiMGVjZTM3ZWUyNzdv/eoY: 00:27:30.673 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:30.673 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.673 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:30.673 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:30.673 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:30.673 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.673 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:30.673 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.673 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.673 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.673 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.673 22:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.673 22:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.673 22:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.673 22:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.673 22:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.673 22:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.673 22:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.673 22:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.673 22:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.673 22:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.673 22:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:30.673 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.673 22:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.933 nvme0n1 00:27:30.933 22:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.933 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.933 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.933 22:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.933 22:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.933 22:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.933 
22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.933 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.933 22:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.933 22:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.933 22:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.933 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.933 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:30.933 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.933 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:30.933 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:30.933 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:30.933 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTQ1OGI2YzVmMWIzYWU0ZmZiZTQ1YmRkMGM3YzJmYzc3ODExNGY4YzU2NmQ4NmU5MDRjNzU3NWFiZmJlNDdlM5rmE/U=: 00:27:30.933 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:30.933 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:30.933 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:30.933 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTQ1OGI2YzVmMWIzYWU0ZmZiZTQ1YmRkMGM3YzJmYzc3ODExNGY4YzU2NmQ4NmU5MDRjNzU3NWFiZmJlNDdlM5rmE/U=: 00:27:30.933 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:30.933 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:30.933 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.933 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:30.933 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:30.933 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:30.933 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.933 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:30.933 22:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.933 22:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.934 22:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.934 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.934 22:24:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.934 22:24:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.934 22:24:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.934 22:24:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.934 22:24:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.934 22:24:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.934 22:24:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.934 22:24:56 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.934 22:24:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.934 22:24:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.934 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:30.934 22:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.934 22:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.194 nvme0n1 00:27:31.194 22:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.194 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.194 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.194 22:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.194 22:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.194 22:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.194 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.194 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.194 22:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.194 22:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.194 22:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.194 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:31.194 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.194 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:31.194 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.194 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:31.194 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:31.194 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:31.194 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFlNWZkMGU2MGE3YzE4M2FjMDM5YzI3YzRkY2E0MDIXAFaV: 00:27:31.194 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTU4ZTM3YzY0NzY0ZWJlNWRkMmIzZjllNmU0ZDA0N2VhZDczMTNmYzU0MGJhMTQwOWI1OTMzMDA5ZDcwYWI1ZtGGAwc=: 00:27:31.194 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:31.194 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:31.194 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFlNWZkMGU2MGE3YzE4M2FjMDM5YzI3YzRkY2E0MDIXAFaV: 00:27:31.194 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTU4ZTM3YzY0NzY0ZWJlNWRkMmIzZjllNmU0ZDA0N2VhZDczMTNmYzU0MGJhMTQwOWI1OTMzMDA5ZDcwYWI1ZtGGAwc=: ]] 00:27:31.194 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTU4ZTM3YzY0NzY0ZWJlNWRkMmIzZjllNmU0ZDA0N2VhZDczMTNmYzU0MGJhMTQwOWI1OTMzMDA5ZDcwYWI1ZtGGAwc=: 00:27:31.194 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:31.194 22:24:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.194 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:31.194 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:31.194 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:31.194 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.194 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:31.194 22:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.194 22:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.195 22:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.195 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.195 22:24:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.195 22:24:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.195 22:24:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.195 22:24:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.195 22:24:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.195 22:24:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.195 22:24:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.195 22:24:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.195 22:24:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.195 22:24:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.195 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:31.195 22:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.195 22:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.455 nvme0n1 00:27:31.455 22:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.455 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.455 22:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.455 22:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.455 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.455 22:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.455 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.455 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.455 22:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.455 22:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.715 22:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.715 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:31.715 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:31.715 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.715 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:31.715 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:31.715 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:31.715 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQzM2U4ODk3MTkyMDBlOTVkYTU4ZTIwNjAzNDJiMjNkNzNmNGVlZWYyOTU2ZmUwcUaQtA==: 00:27:31.715 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: 00:27:31.715 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:31.715 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:31.715 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQzM2U4ODk3MTkyMDBlOTVkYTU4ZTIwNjAzNDJiMjNkNzNmNGVlZWYyOTU2ZmUwcUaQtA==: 00:27:31.715 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: ]] 00:27:31.715 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: 00:27:31.715 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:31.715 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.715 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:31.715 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:31.715 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:31.715 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.715 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:31.715 22:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.715 22:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.715 22:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.715 22:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.715 22:24:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.715 22:24:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.715 22:24:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.715 22:24:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.715 22:24:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.715 22:24:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.715 22:24:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.715 22:24:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.715 22:24:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.715 22:24:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.715 22:24:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:31.715 22:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.715 22:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.976 nvme0n1 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVhYmI3NGEyMjQzZWIwYzY5ZWZlNDc2NWY3N2Y4ZDSZDBp+: 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzVkN2MwODMyYjlkOWMwOTUyNGRkOTJmMjEwYmM0OTMqm/lv: 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmVhYmI3NGEyMjQzZWIwYzY5ZWZlNDc2NWY3N2Y4ZDSZDBp+: 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzVkN2MwODMyYjlkOWMwOTUyNGRkOTJmMjEwYmM0OTMqm/lv: ]] 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzVkN2MwODMyYjlkOWMwOTUyNGRkOTJmMjEwYmM0OTMqm/lv: 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.976 22:24:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.976 22:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.237 nvme0n1 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
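The passes traced here all run the same host-side sequence from connect_authenticate in host/auth.sh: restrict the initiator to one DH-CHAP digest and DH group, attach with the key under test (plus the bidirectional controller key when one exists), check that nvme0 shows up in bdev_nvme_get_controllers, and detach. A condensed sketch of one pass is below; rpc_cmd is assumed to be the autotest wrapper around SPDK's scripts/rpc.py, and the address, NQNs, and key names are copied from the trace itself.

  # One connect_authenticate pass (values match the sha256/ffdhe4096 iterations above;
  # digest, dhgroup and keyid change on each loop pass).
  digest=sha256
  dhgroup=ffdhe4096
  keyid=2

  # Limit the host to the digest/DH group under test.
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Attach with the host key; ckey$keyid is passed only when a controller key is defined.
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

  # Authentication succeeded if the controller is listed, then tear it down.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0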
00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q4NGFkY2I3NzZlOGM4ZGE3MTBhYjk2NzAyY2FjYWRiOThkYzJmNmM1ZmMwYTg3ZwiJXg==: 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmUzMTI0ZjdiMjAyZDVhNTNhNzdiMGVjZTM3ZWUyNzdv/eoY: 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q4NGFkY2I3NzZlOGM4ZGE3MTBhYjk2NzAyY2FjYWRiOThkYzJmNmM1ZmMwYTg3ZwiJXg==: 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmUzMTI0ZjdiMjAyZDVhNTNhNzdiMGVjZTM3ZWUyNzdv/eoY: ]] 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmUzMTI0ZjdiMjAyZDVhNTNhNzdiMGVjZTM3ZWUyNzdv/eoY: 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.237 22:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.497 nvme0n1 00:27:32.497 22:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.497 22:24:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.497 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.497 22:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.497 22:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.497 22:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.497 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.497 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.497 22:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.497 22:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.757 22:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.757 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.757 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:32.757 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.757 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:32.757 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:32.757 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:32.757 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTQ1OGI2YzVmMWIzYWU0ZmZiZTQ1YmRkMGM3YzJmYzc3ODExNGY4YzU2NmQ4NmU5MDRjNzU3NWFiZmJlNDdlM5rmE/U=: 00:27:32.757 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:32.757 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:32.757 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:32.757 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTQ1OGI2YzVmMWIzYWU0ZmZiZTQ1YmRkMGM3YzJmYzc3ODExNGY4YzU2NmQ4NmU5MDRjNzU3NWFiZmJlNDdlM5rmE/U=: 00:27:32.757 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:32.757 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:32.757 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.757 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:32.757 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:32.757 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:32.757 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.757 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:32.757 22:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.757 22:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.757 22:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.757 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.757 22:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.757 22:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.757 22:24:57 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.757 22:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.757 22:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.757 22:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.757 22:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.757 22:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.757 22:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.757 22:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.758 22:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:32.758 22:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.758 22:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.018 nvme0n1 00:27:33.018 22:24:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.018 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.018 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.018 22:24:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.018 22:24:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.018 22:24:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.018 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.018 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.019 22:24:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.019 22:24:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.019 22:24:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.019 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:33.019 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.019 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:33.019 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.019 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:33.019 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:33.019 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:33.019 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFlNWZkMGU2MGE3YzE4M2FjMDM5YzI3YzRkY2E0MDIXAFaV: 00:27:33.019 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTU4ZTM3YzY0NzY0ZWJlNWRkMmIzZjllNmU0ZDA0N2VhZDczMTNmYzU0MGJhMTQwOWI1OTMzMDA5ZDcwYWI1ZtGGAwc=: 00:27:33.019 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:33.019 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:33.019 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFlNWZkMGU2MGE3YzE4M2FjMDM5YzI3YzRkY2E0MDIXAFaV: 00:27:33.019 22:24:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTU4ZTM3YzY0NzY0ZWJlNWRkMmIzZjllNmU0ZDA0N2VhZDczMTNmYzU0MGJhMTQwOWI1OTMzMDA5ZDcwYWI1ZtGGAwc=: ]] 00:27:33.019 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTU4ZTM3YzY0NzY0ZWJlNWRkMmIzZjllNmU0ZDA0N2VhZDczMTNmYzU0MGJhMTQwOWI1OTMzMDA5ZDcwYWI1ZtGGAwc=: 00:27:33.019 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:33.019 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.019 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:33.019 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:33.019 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:33.019 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.019 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:33.019 22:24:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.019 22:24:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.019 22:24:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.019 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.019 22:24:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:33.019 22:24:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:33.019 22:24:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:33.019 22:24:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.019 22:24:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.019 22:24:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:33.019 22:24:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.019 22:24:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:33.019 22:24:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:33.019 22:24:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:33.019 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:33.019 22:24:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.019 22:24:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.590 nvme0n1 00:27:33.590 22:24:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.590 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.590 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.590 22:24:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.590 22:24:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.590 22:24:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.590 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.590 
22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.590 22:24:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.590 22:24:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.590 22:24:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.590 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.590 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:33.590 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.590 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:33.590 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:33.590 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:33.590 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQzM2U4ODk3MTkyMDBlOTVkYTU4ZTIwNjAzNDJiMjNkNzNmNGVlZWYyOTU2ZmUwcUaQtA==: 00:27:33.590 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: 00:27:33.590 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:33.590 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:33.590 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQzM2U4ODk3MTkyMDBlOTVkYTU4ZTIwNjAzNDJiMjNkNzNmNGVlZWYyOTU2ZmUwcUaQtA==: 00:27:33.590 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: ]] 00:27:33.590 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: 00:27:33.590 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:33.590 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.590 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:33.590 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:33.590 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:33.590 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.590 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:33.590 22:24:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.590 22:24:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.590 22:24:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.590 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.590 22:24:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:33.590 22:24:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:33.590 22:24:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:33.590 22:24:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.590 22:24:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.590 22:24:58 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:33.590 22:24:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.590 22:24:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:33.590 22:24:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:33.590 22:24:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:33.590 22:24:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:33.590 22:24:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.590 22:24:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.851 nvme0n1 00:27:33.851 22:24:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.851 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.851 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.851 22:24:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.851 22:24:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.851 22:24:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.111 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.111 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.111 22:24:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.111 22:24:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.111 22:24:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.111 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.111 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:34.111 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.111 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:34.111 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:34.111 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:34.111 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVhYmI3NGEyMjQzZWIwYzY5ZWZlNDc2NWY3N2Y4ZDSZDBp+: 00:27:34.111 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzVkN2MwODMyYjlkOWMwOTUyNGRkOTJmMjEwYmM0OTMqm/lv: 00:27:34.111 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:34.112 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:34.112 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmVhYmI3NGEyMjQzZWIwYzY5ZWZlNDc2NWY3N2Y4ZDSZDBp+: 00:27:34.112 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzVkN2MwODMyYjlkOWMwOTUyNGRkOTJmMjEwYmM0OTMqm/lv: ]] 00:27:34.112 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzVkN2MwODMyYjlkOWMwOTUyNGRkOTJmMjEwYmM0OTMqm/lv: 00:27:34.112 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:34.112 22:24:59 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.112 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:34.112 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:34.112 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:34.112 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.112 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:34.112 22:24:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.112 22:24:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.112 22:24:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.112 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.112 22:24:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.112 22:24:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.112 22:24:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.112 22:24:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.112 22:24:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.112 22:24:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.112 22:24:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.112 22:24:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.112 22:24:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.112 22:24:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.112 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:34.112 22:24:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.112 22:24:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.682 nvme0n1 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.682 
22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q4NGFkY2I3NzZlOGM4ZGE3MTBhYjk2NzAyY2FjYWRiOThkYzJmNmM1ZmMwYTg3ZwiJXg==: 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmUzMTI0ZjdiMjAyZDVhNTNhNzdiMGVjZTM3ZWUyNzdv/eoY: 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q4NGFkY2I3NzZlOGM4ZGE3MTBhYjk2NzAyY2FjYWRiOThkYzJmNmM1ZmMwYTg3ZwiJXg==: 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmUzMTI0ZjdiMjAyZDVhNTNhNzdiMGVjZTM3ZWUyNzdv/eoY: ]] 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmUzMTI0ZjdiMjAyZDVhNTNhNzdiMGVjZTM3ZWUyNzdv/eoY: 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.682 22:24:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.943 nvme0n1 00:27:34.943 22:25:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.943 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.943 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.943 22:25:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.943 22:25:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.943 22:25:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.203 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.203 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.203 22:25:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.203 22:25:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.203 22:25:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.203 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.203 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:35.203 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.203 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:35.203 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:35.203 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:35.203 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTQ1OGI2YzVmMWIzYWU0ZmZiZTQ1YmRkMGM3YzJmYzc3ODExNGY4YzU2NmQ4NmU5MDRjNzU3NWFiZmJlNDdlM5rmE/U=: 00:27:35.204 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:35.204 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:35.204 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:35.204 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTQ1OGI2YzVmMWIzYWU0ZmZiZTQ1YmRkMGM3YzJmYzc3ODExNGY4YzU2NmQ4NmU5MDRjNzU3NWFiZmJlNDdlM5rmE/U=: 00:27:35.204 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:35.204 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:35.204 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.204 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:35.204 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:35.204 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:35.204 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.204 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:35.204 22:25:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.204 22:25:00 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:35.204 22:25:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.204 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.204 22:25:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:35.204 22:25:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:35.204 22:25:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:35.204 22:25:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.204 22:25:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.204 22:25:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:35.204 22:25:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.204 22:25:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:35.204 22:25:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:35.204 22:25:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:35.204 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:35.204 22:25:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.204 22:25:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.464 nvme0n1 00:27:35.464 22:25:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.464 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.464 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.464 22:25:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.464 22:25:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.464 22:25:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.725 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.725 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.725 22:25:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.725 22:25:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.725 22:25:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.725 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:35.725 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.725 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:35.725 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.725 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:35.725 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:35.725 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:35.725 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFlNWZkMGU2MGE3YzE4M2FjMDM5YzI3YzRkY2E0MDIXAFaV: 00:27:35.725 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZTU4ZTM3YzY0NzY0ZWJlNWRkMmIzZjllNmU0ZDA0N2VhZDczMTNmYzU0MGJhMTQwOWI1OTMzMDA5ZDcwYWI1ZtGGAwc=: 00:27:35.725 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:35.725 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:35.725 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFlNWZkMGU2MGE3YzE4M2FjMDM5YzI3YzRkY2E0MDIXAFaV: 00:27:35.725 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTU4ZTM3YzY0NzY0ZWJlNWRkMmIzZjllNmU0ZDA0N2VhZDczMTNmYzU0MGJhMTQwOWI1OTMzMDA5ZDcwYWI1ZtGGAwc=: ]] 00:27:35.725 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTU4ZTM3YzY0NzY0ZWJlNWRkMmIzZjllNmU0ZDA0N2VhZDczMTNmYzU0MGJhMTQwOWI1OTMzMDA5ZDcwYWI1ZtGGAwc=: 00:27:35.725 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:35.725 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.725 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:35.725 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:35.725 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:35.725 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.725 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:35.725 22:25:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.725 22:25:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.725 22:25:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.725 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.725 22:25:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:35.725 22:25:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:35.725 22:25:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:35.725 22:25:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.725 22:25:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.725 22:25:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:35.725 22:25:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.725 22:25:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:35.725 22:25:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:35.725 22:25:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:35.725 22:25:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:35.725 22:25:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.725 22:25:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.296 nvme0n1 00:27:36.296 22:25:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.296 22:25:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.296 22:25:01 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.296 22:25:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.296 22:25:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.296 22:25:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.557 22:25:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.557 22:25:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.557 22:25:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.557 22:25:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.557 22:25:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.557 22:25:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.557 22:25:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:36.557 22:25:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.557 22:25:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:36.557 22:25:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:36.557 22:25:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:36.557 22:25:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQzM2U4ODk3MTkyMDBlOTVkYTU4ZTIwNjAzNDJiMjNkNzNmNGVlZWYyOTU2ZmUwcUaQtA==: 00:27:36.557 22:25:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: 00:27:36.557 22:25:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:36.557 22:25:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:36.557 22:25:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQzM2U4ODk3MTkyMDBlOTVkYTU4ZTIwNjAzNDJiMjNkNzNmNGVlZWYyOTU2ZmUwcUaQtA==: 00:27:36.557 22:25:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: ]] 00:27:36.557 22:25:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: 00:27:36.557 22:25:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:36.557 22:25:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.557 22:25:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:36.557 22:25:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:36.557 22:25:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:36.557 22:25:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.557 22:25:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:36.557 22:25:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.557 22:25:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.557 22:25:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.557 22:25:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.557 22:25:01 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:27:36.557 22:25:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:36.557 22:25:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:36.557 22:25:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.557 22:25:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.557 22:25:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:36.557 22:25:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.557 22:25:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:36.557 22:25:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:36.557 22:25:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:36.557 22:25:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:36.558 22:25:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.558 22:25:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.129 nvme0n1 00:27:37.129 22:25:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.129 22:25:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.129 22:25:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.129 22:25:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.129 22:25:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.129 22:25:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.391 22:25:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.391 22:25:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.391 22:25:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.391 22:25:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.391 22:25:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.391 22:25:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.391 22:25:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:37.391 22:25:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.391 22:25:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:37.391 22:25:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:37.391 22:25:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:37.391 22:25:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVhYmI3NGEyMjQzZWIwYzY5ZWZlNDc2NWY3N2Y4ZDSZDBp+: 00:27:37.391 22:25:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzVkN2MwODMyYjlkOWMwOTUyNGRkOTJmMjEwYmM0OTMqm/lv: 00:27:37.391 22:25:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:37.391 22:25:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:37.391 22:25:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NmVhYmI3NGEyMjQzZWIwYzY5ZWZlNDc2NWY3N2Y4ZDSZDBp+: 00:27:37.391 22:25:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzVkN2MwODMyYjlkOWMwOTUyNGRkOTJmMjEwYmM0OTMqm/lv: ]] 00:27:37.391 22:25:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzVkN2MwODMyYjlkOWMwOTUyNGRkOTJmMjEwYmM0OTMqm/lv: 00:27:37.391 22:25:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:37.391 22:25:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.391 22:25:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:37.391 22:25:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:37.391 22:25:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:37.391 22:25:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.391 22:25:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:37.391 22:25:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.391 22:25:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.391 22:25:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.391 22:25:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.391 22:25:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:37.391 22:25:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:37.391 22:25:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:37.391 22:25:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.391 22:25:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.391 22:25:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:37.391 22:25:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.391 22:25:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:37.391 22:25:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:37.391 22:25:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:37.391 22:25:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:37.391 22:25:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.391 22:25:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.961 nvme0n1 00:27:37.961 22:25:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.961 22:25:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.961 22:25:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.961 22:25:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.961 22:25:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.961 22:25:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.221 22:25:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.221 
22:25:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.221 22:25:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.221 22:25:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.221 22:25:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.221 22:25:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.221 22:25:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:38.221 22:25:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.221 22:25:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:38.221 22:25:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:38.221 22:25:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:38.221 22:25:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q4NGFkY2I3NzZlOGM4ZGE3MTBhYjk2NzAyY2FjYWRiOThkYzJmNmM1ZmMwYTg3ZwiJXg==: 00:27:38.221 22:25:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmUzMTI0ZjdiMjAyZDVhNTNhNzdiMGVjZTM3ZWUyNzdv/eoY: 00:27:38.221 22:25:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:38.221 22:25:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:38.221 22:25:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q4NGFkY2I3NzZlOGM4ZGE3MTBhYjk2NzAyY2FjYWRiOThkYzJmNmM1ZmMwYTg3ZwiJXg==: 00:27:38.221 22:25:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmUzMTI0ZjdiMjAyZDVhNTNhNzdiMGVjZTM3ZWUyNzdv/eoY: ]] 00:27:38.221 22:25:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmUzMTI0ZjdiMjAyZDVhNTNhNzdiMGVjZTM3ZWUyNzdv/eoY: 00:27:38.221 22:25:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:38.221 22:25:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.221 22:25:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:38.221 22:25:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:38.221 22:25:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:38.221 22:25:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.221 22:25:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:38.221 22:25:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.221 22:25:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.221 22:25:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.221 22:25:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.221 22:25:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:38.221 22:25:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:38.221 22:25:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:38.221 22:25:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.221 22:25:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.221 22:25:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
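The get_main_ns_ip calls interleaved through this trace decide which address the host-side attach should dial. A condensed sketch of that selection logic, reconstructed only from the trace above (the real helper lives in nvmf/common.sh and may differ; TEST_TRANSPORT and the NVMF_* variables are assumed to be exported by the harness):

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          ["rdma"]=NVMF_FIRST_TARGET_IP
          ["tcp"]=NVMF_INITIATOR_IP
      )

      # bail out if the transport is unset or has no mapped variable
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}    # e.g. NVMF_INITIATOR_IP for tcp
      [[ -z ${!ip} ]] && return 1             # indirect expansion of that variable
      echo "${!ip}"                           # resolves to 10.0.0.1 in this run
  }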
00:27:38.221 22:25:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.221 22:25:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:38.221 22:25:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:38.221 22:25:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:38.221 22:25:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:38.221 22:25:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.221 22:25:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.790 nvme0n1 00:27:38.790 22:25:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.790 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.790 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.790 22:25:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.790 22:25:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.790 22:25:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.051 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.051 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.051 22:25:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.051 22:25:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.051 22:25:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.051 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.051 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:39.051 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.051 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:39.051 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:39.051 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:39.051 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTQ1OGI2YzVmMWIzYWU0ZmZiZTQ1YmRkMGM3YzJmYzc3ODExNGY4YzU2NmQ4NmU5MDRjNzU3NWFiZmJlNDdlM5rmE/U=: 00:27:39.051 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:39.051 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:39.051 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:39.051 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTQ1OGI2YzVmMWIzYWU0ZmZiZTQ1YmRkMGM3YzJmYzc3ODExNGY4YzU2NmQ4NmU5MDRjNzU3NWFiZmJlNDdlM5rmE/U=: 00:27:39.051 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:39.051 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:39.051 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.051 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:39.051 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:39.051 
22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:39.051 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.051 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:39.051 22:25:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.051 22:25:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.051 22:25:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.051 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.051 22:25:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:39.051 22:25:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:39.051 22:25:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:39.051 22:25:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.051 22:25:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.051 22:25:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:39.051 22:25:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.051 22:25:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:39.051 22:25:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:39.051 22:25:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:39.051 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:39.051 22:25:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.051 22:25:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.620 nvme0n1 00:27:39.620 22:25:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.620 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.620 22:25:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.620 22:25:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.620 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.620 22:25:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.880 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.880 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.880 22:25:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.880 22:25:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.880 22:25:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.880 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:39.880 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:39.880 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.880 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:27:39.880 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.880 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:39.880 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:39.880 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:39.880 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFlNWZkMGU2MGE3YzE4M2FjMDM5YzI3YzRkY2E0MDIXAFaV: 00:27:39.880 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTU4ZTM3YzY0NzY0ZWJlNWRkMmIzZjllNmU0ZDA0N2VhZDczMTNmYzU0MGJhMTQwOWI1OTMzMDA5ZDcwYWI1ZtGGAwc=: 00:27:39.880 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:39.880 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:39.880 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFlNWZkMGU2MGE3YzE4M2FjMDM5YzI3YzRkY2E0MDIXAFaV: 00:27:39.880 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTU4ZTM3YzY0NzY0ZWJlNWRkMmIzZjllNmU0ZDA0N2VhZDczMTNmYzU0MGJhMTQwOWI1OTMzMDA5ZDcwYWI1ZtGGAwc=: ]] 00:27:39.880 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTU4ZTM3YzY0NzY0ZWJlNWRkMmIzZjllNmU0ZDA0N2VhZDczMTNmYzU0MGJhMTQwOWI1OTMzMDA5ZDcwYWI1ZtGGAwc=: 00:27:39.880 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:39.880 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.880 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:39.880 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:39.880 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:39.880 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.880 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:39.880 22:25:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.880 22:25:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.880 22:25:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.880 22:25:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.880 22:25:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:39.880 22:25:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:39.880 22:25:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:39.880 22:25:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.880 22:25:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.880 22:25:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:39.880 22:25:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.880 22:25:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:39.880 22:25:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:39.880 22:25:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:39.880 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:39.880 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.880 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.880 nvme0n1 00:27:39.880 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.880 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.880 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.880 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.880 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.880 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.880 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.880 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.880 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.880 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.141 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.141 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.141 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:40.141 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.141 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:40.141 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:40.141 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:40.141 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQzM2U4ODk3MTkyMDBlOTVkYTU4ZTIwNjAzNDJiMjNkNzNmNGVlZWYyOTU2ZmUwcUaQtA==: 00:27:40.141 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: 00:27:40.141 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:40.141 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:40.141 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQzM2U4ODk3MTkyMDBlOTVkYTU4ZTIwNjAzNDJiMjNkNzNmNGVlZWYyOTU2ZmUwcUaQtA==: 00:27:40.141 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: ]] 00:27:40.141 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: 00:27:40.141 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:40.141 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.141 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:40.141 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:40.141 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:40.141 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
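The ckey=(...) assignment traced just above is what makes bidirectional authentication optional: the array gains the extra flag pair only when a controller key exists for that keyid (keyid 4 has an empty ckey, so only unidirectional auth is attempted there). A self-contained illustration of the same ${var:+...} expansion, using hypothetical placeholder strings rather than the run's real secrets:

  # placeholder values only, not valid DH-HMAC-CHAP secrets;
  # keyid 2 deliberately has no controller key
  keys=("DHHC-1:00:placeholder0:" "DHHC-1:01:placeholder1:" "DHHC-1:02:placeholder2:")
  ckeys=("DHHC-1:03:ctrl-placeholder0:" "DHHC-1:01:ctrl-placeholder1:" "")

  for keyid in "${!keys[@]}"; do
      # expands to the extra flag pair only when ckeys[keyid] is non-empty
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo bdev_nvme_attach_controller --dhchap-key "key${keyid}" "${ckey[@]}"
  done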
00:27:40.141 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:40.141 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.141 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.141 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.141 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.141 22:25:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:40.141 22:25:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:40.141 22:25:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:40.141 22:25:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.141 22:25:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.141 22:25:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:40.141 22:25:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.141 22:25:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:40.141 22:25:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:40.141 22:25:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:40.141 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:40.141 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.141 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.141 nvme0n1 00:27:40.141 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.141 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.141 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.141 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.141 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.141 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.142 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.142 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.142 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.142 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.142 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.142 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.142 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:40.142 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.142 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:40.142 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:40.142 22:25:05 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:27:40.142 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVhYmI3NGEyMjQzZWIwYzY5ZWZlNDc2NWY3N2Y4ZDSZDBp+: 00:27:40.142 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzVkN2MwODMyYjlkOWMwOTUyNGRkOTJmMjEwYmM0OTMqm/lv: 00:27:40.142 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:40.142 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:40.142 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmVhYmI3NGEyMjQzZWIwYzY5ZWZlNDc2NWY3N2Y4ZDSZDBp+: 00:27:40.142 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzVkN2MwODMyYjlkOWMwOTUyNGRkOTJmMjEwYmM0OTMqm/lv: ]] 00:27:40.142 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzVkN2MwODMyYjlkOWMwOTUyNGRkOTJmMjEwYmM0OTMqm/lv: 00:27:40.142 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:40.142 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.142 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:40.142 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:40.142 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:40.142 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.142 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:40.142 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.142 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.142 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.142 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.142 22:25:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:40.142 22:25:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:40.142 22:25:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:40.142 22:25:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.142 22:25:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.142 22:25:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:40.142 22:25:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.142 22:25:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:40.142 22:25:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:40.142 22:25:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:40.142 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:40.402 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.402 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.402 nvme0n1 00:27:40.402 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.402 22:25:05 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.402 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.402 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.402 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.402 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.402 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.402 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.402 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.402 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.402 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.402 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.402 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:40.402 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.402 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:40.402 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:40.402 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:40.402 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q4NGFkY2I3NzZlOGM4ZGE3MTBhYjk2NzAyY2FjYWRiOThkYzJmNmM1ZmMwYTg3ZwiJXg==: 00:27:40.402 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmUzMTI0ZjdiMjAyZDVhNTNhNzdiMGVjZTM3ZWUyNzdv/eoY: 00:27:40.402 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:40.402 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:40.402 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q4NGFkY2I3NzZlOGM4ZGE3MTBhYjk2NzAyY2FjYWRiOThkYzJmNmM1ZmMwYTg3ZwiJXg==: 00:27:40.402 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmUzMTI0ZjdiMjAyZDVhNTNhNzdiMGVjZTM3ZWUyNzdv/eoY: ]] 00:27:40.402 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmUzMTI0ZjdiMjAyZDVhNTNhNzdiMGVjZTM3ZWUyNzdv/eoY: 00:27:40.402 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:40.402 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.402 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:40.402 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:40.402 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:40.402 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.402 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:40.402 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.402 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.402 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.403 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.403 22:25:05 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:27:40.403 22:25:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:40.403 22:25:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:40.403 22:25:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.403 22:25:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.403 22:25:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:40.403 22:25:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.403 22:25:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:40.403 22:25:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:40.403 22:25:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:40.403 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:40.403 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.403 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.663 nvme0n1 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTQ1OGI2YzVmMWIzYWU0ZmZiZTQ1YmRkMGM3YzJmYzc3ODExNGY4YzU2NmQ4NmU5MDRjNzU3NWFiZmJlNDdlM5rmE/U=: 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OTQ1OGI2YzVmMWIzYWU0ZmZiZTQ1YmRkMGM3YzJmYzc3ODExNGY4YzU2NmQ4NmU5MDRjNzU3NWFiZmJlNDdlM5rmE/U=: 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.663 22:25:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.923 nvme0n1 00:27:40.923 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.923 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.923 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.923 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.923 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.923 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.923 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.923 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.923 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:27:40.923 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.923 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.923 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:40.923 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.923 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:40.923 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.923 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:40.923 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:40.923 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:40.923 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFlNWZkMGU2MGE3YzE4M2FjMDM5YzI3YzRkY2E0MDIXAFaV: 00:27:40.923 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTU4ZTM3YzY0NzY0ZWJlNWRkMmIzZjllNmU0ZDA0N2VhZDczMTNmYzU0MGJhMTQwOWI1OTMzMDA5ZDcwYWI1ZtGGAwc=: 00:27:40.923 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:40.923 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:40.923 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFlNWZkMGU2MGE3YzE4M2FjMDM5YzI3YzRkY2E0MDIXAFaV: 00:27:40.923 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTU4ZTM3YzY0NzY0ZWJlNWRkMmIzZjllNmU0ZDA0N2VhZDczMTNmYzU0MGJhMTQwOWI1OTMzMDA5ZDcwYWI1ZtGGAwc=: ]] 00:27:40.923 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTU4ZTM3YzY0NzY0ZWJlNWRkMmIzZjllNmU0ZDA0N2VhZDczMTNmYzU0MGJhMTQwOWI1OTMzMDA5ZDcwYWI1ZtGGAwc=: 00:27:40.923 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:40.923 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.923 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:40.923 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:40.923 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:40.923 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.923 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:40.923 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.923 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.923 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.924 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.924 22:25:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:40.924 22:25:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:40.924 22:25:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:40.924 22:25:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.924 22:25:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.924 22:25:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
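On the target side, the nvmet_auth_set_key steps traced in this stretch (echo 'hmac(sha384)', echo ffdhe3072, echo DHHC-1:...) push the expected digest, DH group and secrets into the kernel nvmet host entry. A sketch of that write sequence under the usual nvmet configfs layout; the host directory path and attribute files are assumptions here, since the trace only shows the echoed values:

  # assumed configfs location for the host entry created earlier in the run
  host_cfg=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

  echo 'hmac(sha384)' > "$host_cfg/dhchap_hash"      # digest the target will insist on
  echo ffdhe3072      > "$host_cfg/dhchap_dhgroup"   # DH group for the exchange
  echo "$key"         > "$host_cfg/dhchap_key"       # host secret (DHHC-1:... string)
  [[ -n $ckey ]] && echo "$ckey" > "$host_cfg/dhchap_ctrl_key"   # optional controller secret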
00:27:40.924 22:25:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.924 22:25:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:40.924 22:25:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:40.924 22:25:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:40.924 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:40.924 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.924 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.184 nvme0n1 00:27:41.184 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQzM2U4ODk3MTkyMDBlOTVkYTU4ZTIwNjAzNDJiMjNkNzNmNGVlZWYyOTU2ZmUwcUaQtA==: 00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: 00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQzM2U4ODk3MTkyMDBlOTVkYTU4ZTIwNjAzNDJiMjNkNzNmNGVlZWYyOTU2ZmUwcUaQtA==: 00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: ]] 00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: 00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
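The connect_authenticate call just entered repeats, for keyid 1, the same host-side sequence this trace keeps cycling through: restrict the allowed digest and DH group, attach over TCP with the DH-HMAC-CHAP key names prepared earlier in the run, confirm the controller actually appeared, then detach. A condensed sketch built from the rpc calls shown in the log (rpc_cmd is the harness wrapper; plain rpc.py takes the same arguments):

  digest=sha384 dhgroup=ffdhe3072 keyid=1

  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

  # the controller only shows up if the DH-HMAC-CHAP handshake succeeded
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

  rpc_cmd bdev_nvme_detach_controller nvme0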
00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.185 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.445 nvme0n1 00:27:41.445 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.445 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.445 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.445 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.445 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.445 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.445 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.445 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.445 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.445 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.445 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.445 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:27:41.445 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:41.445 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.445 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:41.445 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:41.445 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:41.445 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVhYmI3NGEyMjQzZWIwYzY5ZWZlNDc2NWY3N2Y4ZDSZDBp+: 00:27:41.445 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzVkN2MwODMyYjlkOWMwOTUyNGRkOTJmMjEwYmM0OTMqm/lv: 00:27:41.445 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:41.445 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:41.445 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmVhYmI3NGEyMjQzZWIwYzY5ZWZlNDc2NWY3N2Y4ZDSZDBp+: 00:27:41.445 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzVkN2MwODMyYjlkOWMwOTUyNGRkOTJmMjEwYmM0OTMqm/lv: ]] 00:27:41.445 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzVkN2MwODMyYjlkOWMwOTUyNGRkOTJmMjEwYmM0OTMqm/lv: 00:27:41.445 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:41.445 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.445 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:41.445 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:41.445 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:41.445 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.445 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:41.445 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.445 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.445 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.445 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.445 22:25:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:41.445 22:25:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.445 22:25:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.445 22:25:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.445 22:25:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.445 22:25:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:41.446 22:25:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.446 22:25:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:41.446 22:25:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.446 22:25:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:41.446 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:41.446 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.446 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.705 nvme0n1 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q4NGFkY2I3NzZlOGM4ZGE3MTBhYjk2NzAyY2FjYWRiOThkYzJmNmM1ZmMwYTg3ZwiJXg==: 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmUzMTI0ZjdiMjAyZDVhNTNhNzdiMGVjZTM3ZWUyNzdv/eoY: 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q4NGFkY2I3NzZlOGM4ZGE3MTBhYjk2NzAyY2FjYWRiOThkYzJmNmM1ZmMwYTg3ZwiJXg==: 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmUzMTI0ZjdiMjAyZDVhNTNhNzdiMGVjZTM3ZWUyNzdv/eoY: ]] 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmUzMTI0ZjdiMjAyZDVhNTNhNzdiMGVjZTM3ZWUyNzdv/eoY: 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.705 22:25:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.965 nvme0n1 00:27:41.965 22:25:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.965 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.965 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.965 22:25:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.965 22:25:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.965 22:25:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.965 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.965 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.965 22:25:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.965 22:25:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.965 22:25:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.965 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.965 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:41.965 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.965 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:41.965 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:41.965 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:41.965 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OTQ1OGI2YzVmMWIzYWU0ZmZiZTQ1YmRkMGM3YzJmYzc3ODExNGY4YzU2NmQ4NmU5MDRjNzU3NWFiZmJlNDdlM5rmE/U=: 00:27:41.965 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:41.965 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:41.965 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:41.965 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTQ1OGI2YzVmMWIzYWU0ZmZiZTQ1YmRkMGM3YzJmYzc3ODExNGY4YzU2NmQ4NmU5MDRjNzU3NWFiZmJlNDdlM5rmE/U=: 00:27:41.965 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:41.965 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:41.965 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.965 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:41.965 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:41.965 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:41.965 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.965 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:41.965 22:25:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.965 22:25:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.965 22:25:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.965 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.965 22:25:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:41.965 22:25:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.965 22:25:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.965 22:25:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.965 22:25:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.965 22:25:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:41.966 22:25:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.966 22:25:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:41.966 22:25:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.966 22:25:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:41.966 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:41.966 22:25:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.966 22:25:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.226 nvme0n1 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.226 22:25:07 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFlNWZkMGU2MGE3YzE4M2FjMDM5YzI3YzRkY2E0MDIXAFaV: 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTU4ZTM3YzY0NzY0ZWJlNWRkMmIzZjllNmU0ZDA0N2VhZDczMTNmYzU0MGJhMTQwOWI1OTMzMDA5ZDcwYWI1ZtGGAwc=: 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFlNWZkMGU2MGE3YzE4M2FjMDM5YzI3YzRkY2E0MDIXAFaV: 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTU4ZTM3YzY0NzY0ZWJlNWRkMmIzZjllNmU0ZDA0N2VhZDczMTNmYzU0MGJhMTQwOWI1OTMzMDA5ZDcwYWI1ZtGGAwc=: ]] 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTU4ZTM3YzY0NzY0ZWJlNWRkMmIzZjllNmU0ZDA0N2VhZDczMTNmYzU0MGJhMTQwOWI1OTMzMDA5ZDcwYWI1ZtGGAwc=: 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.226 22:25:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.486 nvme0n1 00:27:42.486 22:25:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.486 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.486 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.486 22:25:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.486 22:25:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.486 22:25:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.746 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.746 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.746 22:25:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.746 22:25:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.746 22:25:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.746 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.746 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:42.746 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.746 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:42.746 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:42.746 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:42.746 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQzM2U4ODk3MTkyMDBlOTVkYTU4ZTIwNjAzNDJiMjNkNzNmNGVlZWYyOTU2ZmUwcUaQtA==: 00:27:42.746 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: 00:27:42.746 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:42.746 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:42.746 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjQzM2U4ODk3MTkyMDBlOTVkYTU4ZTIwNjAzNDJiMjNkNzNmNGVlZWYyOTU2ZmUwcUaQtA==: 00:27:42.746 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: ]] 00:27:42.746 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: 00:27:42.746 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:42.746 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.746 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:42.746 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:42.746 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:42.746 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.746 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:42.746 22:25:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.746 22:25:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.746 22:25:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.746 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.746 22:25:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:42.747 22:25:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:42.747 22:25:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:42.747 22:25:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.747 22:25:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.747 22:25:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:42.747 22:25:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.747 22:25:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:42.747 22:25:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:42.747 22:25:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:42.747 22:25:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:42.747 22:25:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.747 22:25:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.007 nvme0n1 00:27:43.007 22:25:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.007 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.007 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.007 22:25:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.007 22:25:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.007 22:25:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.007 22:25:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.007 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.007 22:25:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.007 22:25:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.007 22:25:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.007 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.007 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:43.007 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.007 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:43.007 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:43.007 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:43.007 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVhYmI3NGEyMjQzZWIwYzY5ZWZlNDc2NWY3N2Y4ZDSZDBp+: 00:27:43.007 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzVkN2MwODMyYjlkOWMwOTUyNGRkOTJmMjEwYmM0OTMqm/lv: 00:27:43.007 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:43.007 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:43.007 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmVhYmI3NGEyMjQzZWIwYzY5ZWZlNDc2NWY3N2Y4ZDSZDBp+: 00:27:43.007 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzVkN2MwODMyYjlkOWMwOTUyNGRkOTJmMjEwYmM0OTMqm/lv: ]] 00:27:43.007 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzVkN2MwODMyYjlkOWMwOTUyNGRkOTJmMjEwYmM0OTMqm/lv: 00:27:43.007 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:43.007 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.007 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:43.007 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:43.007 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:43.007 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.007 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:43.007 22:25:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.007 22:25:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.007 22:25:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.007 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.008 22:25:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.008 22:25:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.008 22:25:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.008 22:25:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.008 22:25:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.008 22:25:08 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.008 22:25:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.008 22:25:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.008 22:25:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.008 22:25:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.008 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:43.008 22:25:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.008 22:25:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.269 nvme0n1 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q4NGFkY2I3NzZlOGM4ZGE3MTBhYjk2NzAyY2FjYWRiOThkYzJmNmM1ZmMwYTg3ZwiJXg==: 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmUzMTI0ZjdiMjAyZDVhNTNhNzdiMGVjZTM3ZWUyNzdv/eoY: 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q4NGFkY2I3NzZlOGM4ZGE3MTBhYjk2NzAyY2FjYWRiOThkYzJmNmM1ZmMwYTg3ZwiJXg==: 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmUzMTI0ZjdiMjAyZDVhNTNhNzdiMGVjZTM3ZWUyNzdv/eoY: ]] 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmUzMTI0ZjdiMjAyZDVhNTNhNzdiMGVjZTM3ZWUyNzdv/eoY: 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:43.269 22:25:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.269 22:25:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.906 nvme0n1 00:27:43.906 22:25:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.906 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.906 22:25:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.906 22:25:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.906 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.906 22:25:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.906 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.906 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.906 22:25:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.906 22:25:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.906 22:25:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.906 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:43.906 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:43.906 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.906 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:43.906 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:43.906 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:43.906 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTQ1OGI2YzVmMWIzYWU0ZmZiZTQ1YmRkMGM3YzJmYzc3ODExNGY4YzU2NmQ4NmU5MDRjNzU3NWFiZmJlNDdlM5rmE/U=: 00:27:43.906 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:43.906 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:43.906 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:43.906 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTQ1OGI2YzVmMWIzYWU0ZmZiZTQ1YmRkMGM3YzJmYzc3ODExNGY4YzU2NmQ4NmU5MDRjNzU3NWFiZmJlNDdlM5rmE/U=: 00:27:43.906 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:43.906 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:43.906 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.906 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:43.906 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:43.906 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:43.906 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.906 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:43.906 22:25:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.906 22:25:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.906 22:25:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.906 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.906 22:25:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.906 22:25:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.906 22:25:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.906 22:25:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.906 22:25:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.906 22:25:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.906 22:25:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.906 22:25:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.906 22:25:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.906 22:25:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.906 22:25:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:43.906 22:25:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:27:43.907 22:25:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.174 nvme0n1 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFlNWZkMGU2MGE3YzE4M2FjMDM5YzI3YzRkY2E0MDIXAFaV: 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTU4ZTM3YzY0NzY0ZWJlNWRkMmIzZjllNmU0ZDA0N2VhZDczMTNmYzU0MGJhMTQwOWI1OTMzMDA5ZDcwYWI1ZtGGAwc=: 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFlNWZkMGU2MGE3YzE4M2FjMDM5YzI3YzRkY2E0MDIXAFaV: 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTU4ZTM3YzY0NzY0ZWJlNWRkMmIzZjllNmU0ZDA0N2VhZDczMTNmYzU0MGJhMTQwOWI1OTMzMDA5ZDcwYWI1ZtGGAwc=: ]] 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTU4ZTM3YzY0NzY0ZWJlNWRkMmIzZjllNmU0ZDA0N2VhZDczMTNmYzU0MGJhMTQwOWI1OTMzMDA5ZDcwYWI1ZtGGAwc=: 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.174 22:25:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.745 nvme0n1 00:27:44.745 22:25:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.745 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.745 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.745 22:25:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.745 22:25:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.745 22:25:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.745 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.745 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.745 22:25:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.745 22:25:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.745 22:25:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.745 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.745 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:44.745 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.745 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:44.745 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:44.745 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:44.745 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjQzM2U4ODk3MTkyMDBlOTVkYTU4ZTIwNjAzNDJiMjNkNzNmNGVlZWYyOTU2ZmUwcUaQtA==: 00:27:44.745 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: 00:27:44.745 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:44.745 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:44.746 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQzM2U4ODk3MTkyMDBlOTVkYTU4ZTIwNjAzNDJiMjNkNzNmNGVlZWYyOTU2ZmUwcUaQtA==: 00:27:44.746 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: ]] 00:27:44.746 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: 00:27:44.746 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:44.746 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.746 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:44.746 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:44.746 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:44.746 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.746 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:44.746 22:25:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.746 22:25:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.746 22:25:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.746 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.746 22:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:44.746 22:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:44.746 22:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:44.746 22:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.746 22:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.746 22:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:44.746 22:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.746 22:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:44.746 22:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:44.746 22:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:44.746 22:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:44.746 22:25:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.746 22:25:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.006 nvme0n1 00:27:45.006 22:25:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.006 22:25:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.006 22:25:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.006 22:25:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.006 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.006 22:25:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.268 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.268 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.268 22:25:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.268 22:25:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.268 22:25:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.268 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.268 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:45.268 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.268 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:45.268 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:45.268 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:45.268 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVhYmI3NGEyMjQzZWIwYzY5ZWZlNDc2NWY3N2Y4ZDSZDBp+: 00:27:45.268 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzVkN2MwODMyYjlkOWMwOTUyNGRkOTJmMjEwYmM0OTMqm/lv: 00:27:45.268 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:45.268 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:45.268 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmVhYmI3NGEyMjQzZWIwYzY5ZWZlNDc2NWY3N2Y4ZDSZDBp+: 00:27:45.268 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzVkN2MwODMyYjlkOWMwOTUyNGRkOTJmMjEwYmM0OTMqm/lv: ]] 00:27:45.268 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzVkN2MwODMyYjlkOWMwOTUyNGRkOTJmMjEwYmM0OTMqm/lv: 00:27:45.268 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:45.268 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.268 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:45.268 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:45.268 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:45.268 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.268 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:45.268 22:25:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.268 22:25:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.268 22:25:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.268 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.268 22:25:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:27:45.268 22:25:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:45.268 22:25:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:45.268 22:25:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.268 22:25:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.268 22:25:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:45.268 22:25:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.268 22:25:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:45.268 22:25:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:45.268 22:25:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:45.268 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:45.268 22:25:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.268 22:25:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.529 nvme0n1 00:27:45.529 22:25:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.529 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.529 22:25:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.529 22:25:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.529 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.529 22:25:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.790 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.790 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.790 22:25:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.790 22:25:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.790 22:25:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.790 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.790 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:45.790 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.790 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:45.790 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:45.790 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:45.790 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q4NGFkY2I3NzZlOGM4ZGE3MTBhYjk2NzAyY2FjYWRiOThkYzJmNmM1ZmMwYTg3ZwiJXg==: 00:27:45.790 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmUzMTI0ZjdiMjAyZDVhNTNhNzdiMGVjZTM3ZWUyNzdv/eoY: 00:27:45.790 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:45.790 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:45.790 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:M2Q4NGFkY2I3NzZlOGM4ZGE3MTBhYjk2NzAyY2FjYWRiOThkYzJmNmM1ZmMwYTg3ZwiJXg==: 00:27:45.790 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmUzMTI0ZjdiMjAyZDVhNTNhNzdiMGVjZTM3ZWUyNzdv/eoY: ]] 00:27:45.790 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmUzMTI0ZjdiMjAyZDVhNTNhNzdiMGVjZTM3ZWUyNzdv/eoY: 00:27:45.790 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:45.790 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.790 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:45.790 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:45.790 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:45.790 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.790 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:45.790 22:25:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.790 22:25:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.790 22:25:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.790 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.790 22:25:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:45.790 22:25:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:45.790 22:25:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:45.790 22:25:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.790 22:25:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.790 22:25:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:45.790 22:25:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.790 22:25:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:45.790 22:25:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:45.790 22:25:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:45.790 22:25:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:45.790 22:25:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.790 22:25:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.361 nvme0n1 00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTQ1OGI2YzVmMWIzYWU0ZmZiZTQ1YmRkMGM3YzJmYzc3ODExNGY4YzU2NmQ4NmU5MDRjNzU3NWFiZmJlNDdlM5rmE/U=: 00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTQ1OGI2YzVmMWIzYWU0ZmZiZTQ1YmRkMGM3YzJmYzc3ODExNGY4YzU2NmQ4NmU5MDRjNzU3NWFiZmJlNDdlM5rmE/U=: 00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.361 22:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.622 nvme0n1 00:27:46.622 22:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.883 22:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.883 22:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.883 22:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.883 22:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.883 22:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.883 22:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.883 22:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.883 22:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.883 22:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.883 22:25:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.883 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:46.883 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.883 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:46.883 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.883 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:46.883 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:46.883 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:46.883 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFlNWZkMGU2MGE3YzE4M2FjMDM5YzI3YzRkY2E0MDIXAFaV: 00:27:46.883 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTU4ZTM3YzY0NzY0ZWJlNWRkMmIzZjllNmU0ZDA0N2VhZDczMTNmYzU0MGJhMTQwOWI1OTMzMDA5ZDcwYWI1ZtGGAwc=: 00:27:46.883 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:46.883 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:46.883 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFlNWZkMGU2MGE3YzE4M2FjMDM5YzI3YzRkY2E0MDIXAFaV: 00:27:46.883 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTU4ZTM3YzY0NzY0ZWJlNWRkMmIzZjllNmU0ZDA0N2VhZDczMTNmYzU0MGJhMTQwOWI1OTMzMDA5ZDcwYWI1ZtGGAwc=: ]] 00:27:46.883 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTU4ZTM3YzY0NzY0ZWJlNWRkMmIzZjllNmU0ZDA0N2VhZDczMTNmYzU0MGJhMTQwOWI1OTMzMDA5ZDcwYWI1ZtGGAwc=: 00:27:46.883 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:46.883 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:27:46.883 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:46.883 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:46.883 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:46.883 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.883 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:46.883 22:25:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.883 22:25:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.883 22:25:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.883 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.883 22:25:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:46.883 22:25:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:46.883 22:25:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:46.883 22:25:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.883 22:25:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.883 22:25:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:46.883 22:25:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.883 22:25:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:46.883 22:25:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:46.883 22:25:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:46.883 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:46.883 22:25:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.883 22:25:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.825 nvme0n1 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQzM2U4ODk3MTkyMDBlOTVkYTU4ZTIwNjAzNDJiMjNkNzNmNGVlZWYyOTU2ZmUwcUaQtA==: 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQzM2U4ODk3MTkyMDBlOTVkYTU4ZTIwNjAzNDJiMjNkNzNmNGVlZWYyOTU2ZmUwcUaQtA==: 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: ]] 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.825 22:25:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.394 nvme0n1 00:27:48.394 22:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.394 22:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.394 22:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.394 22:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.394 22:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.394 22:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.394 22:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.394 22:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.394 22:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.395 22:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.395 22:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.395 22:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.395 22:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:48.395 22:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.395 22:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:48.395 22:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:48.395 22:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:48.395 22:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVhYmI3NGEyMjQzZWIwYzY5ZWZlNDc2NWY3N2Y4ZDSZDBp+: 00:27:48.395 22:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzVkN2MwODMyYjlkOWMwOTUyNGRkOTJmMjEwYmM0OTMqm/lv: 00:27:48.395 22:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:48.395 22:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:48.395 22:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmVhYmI3NGEyMjQzZWIwYzY5ZWZlNDc2NWY3N2Y4ZDSZDBp+: 00:27:48.395 22:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzVkN2MwODMyYjlkOWMwOTUyNGRkOTJmMjEwYmM0OTMqm/lv: ]] 00:27:48.395 22:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzVkN2MwODMyYjlkOWMwOTUyNGRkOTJmMjEwYmM0OTMqm/lv: 00:27:48.395 22:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:48.395 22:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.395 22:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:48.395 22:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:48.395 22:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:48.395 22:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.395 22:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:27:48.395 22:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.395 22:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.395 22:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.395 22:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.395 22:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:48.395 22:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:48.395 22:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:48.395 22:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.395 22:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.395 22:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:48.395 22:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.395 22:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:48.395 22:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:48.395 22:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:48.395 22:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:48.395 22:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.395 22:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.336 nvme0n1 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:M2Q4NGFkY2I3NzZlOGM4ZGE3MTBhYjk2NzAyY2FjYWRiOThkYzJmNmM1ZmMwYTg3ZwiJXg==: 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmUzMTI0ZjdiMjAyZDVhNTNhNzdiMGVjZTM3ZWUyNzdv/eoY: 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q4NGFkY2I3NzZlOGM4ZGE3MTBhYjk2NzAyY2FjYWRiOThkYzJmNmM1ZmMwYTg3ZwiJXg==: 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmUzMTI0ZjdiMjAyZDVhNTNhNzdiMGVjZTM3ZWUyNzdv/eoY: ]] 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmUzMTI0ZjdiMjAyZDVhNTNhNzdiMGVjZTM3ZWUyNzdv/eoY: 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.336 22:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.279 nvme0n1 00:27:50.279 22:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.280 22:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:27:50.280 22:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.280 22:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.280 22:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.280 22:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.280 22:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.280 22:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.280 22:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.280 22:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.280 22:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.280 22:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.280 22:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:50.280 22:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.280 22:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:50.280 22:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:50.280 22:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:50.280 22:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTQ1OGI2YzVmMWIzYWU0ZmZiZTQ1YmRkMGM3YzJmYzc3ODExNGY4YzU2NmQ4NmU5MDRjNzU3NWFiZmJlNDdlM5rmE/U=: 00:27:50.280 22:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:50.280 22:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:50.280 22:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:50.280 22:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTQ1OGI2YzVmMWIzYWU0ZmZiZTQ1YmRkMGM3YzJmYzc3ODExNGY4YzU2NmQ4NmU5MDRjNzU3NWFiZmJlNDdlM5rmE/U=: 00:27:50.280 22:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:50.280 22:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:50.280 22:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.280 22:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:50.280 22:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:50.280 22:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:50.280 22:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.280 22:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:50.280 22:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.280 22:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.280 22:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.280 22:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.280 22:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:50.280 22:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:50.280 22:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:50.280 22:25:15 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.280 22:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.280 22:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:50.280 22:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.280 22:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:50.280 22:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:50.280 22:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:50.280 22:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:50.280 22:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.280 22:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.850 nvme0n1 00:27:50.850 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.850 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.850 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.850 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.850 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.850 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.850 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.850 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.850 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.850 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.850 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.850 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:50.850 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:50.850 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.850 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:50.850 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.850 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:50.850 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:50.850 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:50.850 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFlNWZkMGU2MGE3YzE4M2FjMDM5YzI3YzRkY2E0MDIXAFaV: 00:27:50.850 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTU4ZTM3YzY0NzY0ZWJlNWRkMmIzZjllNmU0ZDA0N2VhZDczMTNmYzU0MGJhMTQwOWI1OTMzMDA5ZDcwYWI1ZtGGAwc=: 00:27:50.850 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:50.850 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:50.850 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NzFlNWZkMGU2MGE3YzE4M2FjMDM5YzI3YzRkY2E0MDIXAFaV: 00:27:50.850 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTU4ZTM3YzY0NzY0ZWJlNWRkMmIzZjllNmU0ZDA0N2VhZDczMTNmYzU0MGJhMTQwOWI1OTMzMDA5ZDcwYWI1ZtGGAwc=: ]] 00:27:50.850 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTU4ZTM3YzY0NzY0ZWJlNWRkMmIzZjllNmU0ZDA0N2VhZDczMTNmYzU0MGJhMTQwOWI1OTMzMDA5ZDcwYWI1ZtGGAwc=: 00:27:50.850 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:50.850 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.850 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:50.850 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:50.850 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:50.850 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.850 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:50.850 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.850 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.110 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.110 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.110 22:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:51.110 22:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.110 22:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.111 nvme0n1 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.111 22:25:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQzM2U4ODk3MTkyMDBlOTVkYTU4ZTIwNjAzNDJiMjNkNzNmNGVlZWYyOTU2ZmUwcUaQtA==: 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQzM2U4ODk3MTkyMDBlOTVkYTU4ZTIwNjAzNDJiMjNkNzNmNGVlZWYyOTU2ZmUwcUaQtA==: 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: ]] 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.111 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.372 nvme0n1 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVhYmI3NGEyMjQzZWIwYzY5ZWZlNDc2NWY3N2Y4ZDSZDBp+: 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzVkN2MwODMyYjlkOWMwOTUyNGRkOTJmMjEwYmM0OTMqm/lv: 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmVhYmI3NGEyMjQzZWIwYzY5ZWZlNDc2NWY3N2Y4ZDSZDBp+: 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzVkN2MwODMyYjlkOWMwOTUyNGRkOTJmMjEwYmM0OTMqm/lv: ]] 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzVkN2MwODMyYjlkOWMwOTUyNGRkOTJmMjEwYmM0OTMqm/lv: 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.372 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.633 nvme0n1 00:27:51.633 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.633 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.633 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.633 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.633 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.633 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.633 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.633 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.633 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.633 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.633 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.633 22:25:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.633 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:51.633 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.633 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:51.633 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:51.633 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:51.633 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q4NGFkY2I3NzZlOGM4ZGE3MTBhYjk2NzAyY2FjYWRiOThkYzJmNmM1ZmMwYTg3ZwiJXg==: 00:27:51.633 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmUzMTI0ZjdiMjAyZDVhNTNhNzdiMGVjZTM3ZWUyNzdv/eoY: 00:27:51.633 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:51.633 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:51.633 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q4NGFkY2I3NzZlOGM4ZGE3MTBhYjk2NzAyY2FjYWRiOThkYzJmNmM1ZmMwYTg3ZwiJXg==: 00:27:51.633 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmUzMTI0ZjdiMjAyZDVhNTNhNzdiMGVjZTM3ZWUyNzdv/eoY: ]] 00:27:51.633 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmUzMTI0ZjdiMjAyZDVhNTNhNzdiMGVjZTM3ZWUyNzdv/eoY: 00:27:51.633 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:51.633 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.633 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:51.633 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:51.633 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:51.633 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.633 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:51.633 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.633 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.633 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.633 22:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.633 22:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:51.633 22:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.633 22:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.633 22:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.633 22:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.633 22:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.633 22:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.633 22:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.633 22:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.633 22:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.633 22:25:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:51.633 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.633 22:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.894 nvme0n1 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTQ1OGI2YzVmMWIzYWU0ZmZiZTQ1YmRkMGM3YzJmYzc3ODExNGY4YzU2NmQ4NmU5MDRjNzU3NWFiZmJlNDdlM5rmE/U=: 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTQ1OGI2YzVmMWIzYWU0ZmZiZTQ1YmRkMGM3YzJmYzc3ODExNGY4YzU2NmQ4NmU5MDRjNzU3NWFiZmJlNDdlM5rmE/U=: 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.894 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.155 nvme0n1 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzFlNWZkMGU2MGE3YzE4M2FjMDM5YzI3YzRkY2E0MDIXAFaV: 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTU4ZTM3YzY0NzY0ZWJlNWRkMmIzZjllNmU0ZDA0N2VhZDczMTNmYzU0MGJhMTQwOWI1OTMzMDA5ZDcwYWI1ZtGGAwc=: 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFlNWZkMGU2MGE3YzE4M2FjMDM5YzI3YzRkY2E0MDIXAFaV: 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTU4ZTM3YzY0NzY0ZWJlNWRkMmIzZjllNmU0ZDA0N2VhZDczMTNmYzU0MGJhMTQwOWI1OTMzMDA5ZDcwYWI1ZtGGAwc=: ]] 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTU4ZTM3YzY0NzY0ZWJlNWRkMmIzZjllNmU0ZDA0N2VhZDczMTNmYzU0MGJhMTQwOWI1OTMzMDA5ZDcwYWI1ZtGGAwc=: 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.155 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.417 nvme0n1 00:27:52.417 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.417 
22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.417 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.417 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.417 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.417 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.417 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.417 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.417 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.417 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.417 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.417 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.417 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:52.417 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.417 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:52.417 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:52.417 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:52.417 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQzM2U4ODk3MTkyMDBlOTVkYTU4ZTIwNjAzNDJiMjNkNzNmNGVlZWYyOTU2ZmUwcUaQtA==: 00:27:52.417 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: 00:27:52.417 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:52.417 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:52.417 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQzM2U4ODk3MTkyMDBlOTVkYTU4ZTIwNjAzNDJiMjNkNzNmNGVlZWYyOTU2ZmUwcUaQtA==: 00:27:52.417 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: ]] 00:27:52.417 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: 00:27:52.417 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:52.417 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.417 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:52.417 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:52.417 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:52.417 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.417 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:52.417 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.417 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.417 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.417 22:25:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.417 22:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:52.417 22:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:52.417 22:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:52.417 22:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.417 22:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.417 22:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:52.417 22:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.417 22:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:52.417 22:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:52.417 22:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:52.417 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:52.417 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.417 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.678 nvme0n1 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVhYmI3NGEyMjQzZWIwYzY5ZWZlNDc2NWY3N2Y4ZDSZDBp+: 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzVkN2MwODMyYjlkOWMwOTUyNGRkOTJmMjEwYmM0OTMqm/lv: 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmVhYmI3NGEyMjQzZWIwYzY5ZWZlNDc2NWY3N2Y4ZDSZDBp+: 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzVkN2MwODMyYjlkOWMwOTUyNGRkOTJmMjEwYmM0OTMqm/lv: ]] 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzVkN2MwODMyYjlkOWMwOTUyNGRkOTJmMjEwYmM0OTMqm/lv: 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.678 22:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.938 nvme0n1 00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.938 22:25:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q4NGFkY2I3NzZlOGM4ZGE3MTBhYjk2NzAyY2FjYWRiOThkYzJmNmM1ZmMwYTg3ZwiJXg==: 00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmUzMTI0ZjdiMjAyZDVhNTNhNzdiMGVjZTM3ZWUyNzdv/eoY: 00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q4NGFkY2I3NzZlOGM4ZGE3MTBhYjk2NzAyY2FjYWRiOThkYzJmNmM1ZmMwYTg3ZwiJXg==: 00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmUzMTI0ZjdiMjAyZDVhNTNhNzdiMGVjZTM3ZWUyNzdv/eoY: ]] 00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmUzMTI0ZjdiMjAyZDVhNTNhNzdiMGVjZTM3ZWUyNzdv/eoY: 00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
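For reference, the @64/@65 records that close out each authenticated attach amount to roughly the following, assuming rpc_cmd forwards to SPDK's scripts/rpc.py against the running target:

  # confirm that exactly one controller named nvme0 came up, then tear it down
  name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]
  scripts/rpc.py bdev_nvme_detach_controller nvme0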
00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.938 22:25:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.198 nvme0n1 00:27:53.198 22:25:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.198 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.198 22:25:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.198 22:25:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.198 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.198 22:25:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.198 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.198 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.198 22:25:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.198 22:25:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.198 22:25:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.198 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.198 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:53.198 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.198 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.198 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:53.198 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:53.198 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTQ1OGI2YzVmMWIzYWU0ZmZiZTQ1YmRkMGM3YzJmYzc3ODExNGY4YzU2NmQ4NmU5MDRjNzU3NWFiZmJlNDdlM5rmE/U=: 00:27:53.198 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:53.198 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.198 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:53.198 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTQ1OGI2YzVmMWIzYWU0ZmZiZTQ1YmRkMGM3YzJmYzc3ODExNGY4YzU2NmQ4NmU5MDRjNzU3NWFiZmJlNDdlM5rmE/U=: 00:27:53.198 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:53.198 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:53.198 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.198 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.198 
22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:53.198 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:53.198 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.198 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:53.198 22:25:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.198 22:25:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.198 22:25:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.198 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.198 22:25:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.198 22:25:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.198 22:25:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.198 22:25:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.198 22:25:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.198 22:25:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.198 22:25:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.198 22:25:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.198 22:25:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.198 22:25:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.198 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:53.198 22:25:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.198 22:25:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.457 nvme0n1 00:27:53.457 22:25:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.457 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.457 22:25:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.457 22:25:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.457 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.457 22:25:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.457 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.457 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.457 22:25:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.457 22:25:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.457 22:25:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.457 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:53.457 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.457 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:27:53.457 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.457 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.457 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:53.457 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:53.457 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFlNWZkMGU2MGE3YzE4M2FjMDM5YzI3YzRkY2E0MDIXAFaV: 00:27:53.457 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTU4ZTM3YzY0NzY0ZWJlNWRkMmIzZjllNmU0ZDA0N2VhZDczMTNmYzU0MGJhMTQwOWI1OTMzMDA5ZDcwYWI1ZtGGAwc=: 00:27:53.457 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.457 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:53.457 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFlNWZkMGU2MGE3YzE4M2FjMDM5YzI3YzRkY2E0MDIXAFaV: 00:27:53.457 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTU4ZTM3YzY0NzY0ZWJlNWRkMmIzZjllNmU0ZDA0N2VhZDczMTNmYzU0MGJhMTQwOWI1OTMzMDA5ZDcwYWI1ZtGGAwc=: ]] 00:27:53.457 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTU4ZTM3YzY0NzY0ZWJlNWRkMmIzZjllNmU0ZDA0N2VhZDczMTNmYzU0MGJhMTQwOWI1OTMzMDA5ZDcwYWI1ZtGGAwc=: 00:27:53.457 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:53.457 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.457 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.457 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:53.457 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:53.458 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.458 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:53.458 22:25:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.458 22:25:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.458 22:25:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.458 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.458 22:25:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.458 22:25:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.458 22:25:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.458 22:25:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.458 22:25:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.458 22:25:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.458 22:25:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.458 22:25:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.458 22:25:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.458 22:25:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.458 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:53.458 22:25:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.458 22:25:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.717 nvme0n1 00:27:53.717 22:25:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.717 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.717 22:25:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.717 22:25:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.717 22:25:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.717 22:25:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.717 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.717 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.717 22:25:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.717 22:25:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.977 22:25:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.977 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.977 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:53.977 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.977 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.977 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:53.977 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:53.977 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQzM2U4ODk3MTkyMDBlOTVkYTU4ZTIwNjAzNDJiMjNkNzNmNGVlZWYyOTU2ZmUwcUaQtA==: 00:27:53.977 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: 00:27:53.977 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.977 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:53.977 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQzM2U4ODk3MTkyMDBlOTVkYTU4ZTIwNjAzNDJiMjNkNzNmNGVlZWYyOTU2ZmUwcUaQtA==: 00:27:53.977 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: ]] 00:27:53.977 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: 00:27:53.977 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:53.977 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.977 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.977 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:53.977 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:53.977 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.977 22:25:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:53.977 22:25:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.977 22:25:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.977 22:25:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.977 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.977 22:25:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.977 22:25:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.977 22:25:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.977 22:25:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.977 22:25:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.977 22:25:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.977 22:25:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.977 22:25:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.977 22:25:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.977 22:25:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.977 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:53.977 22:25:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.977 22:25:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.239 nvme0n1 00:27:54.239 22:25:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.239 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.239 22:25:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.239 22:25:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.239 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.239 22:25:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.239 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.239 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.239 22:25:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.239 22:25:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.239 22:25:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.239 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.239 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:54.239 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.239 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:54.239 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:54.239 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
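The @48-@51 echo records in each nvmet_auth_set_key call (the next set follows just below for ffdhe4096, key ID 2) appear to program the kernel nvmet target for the host entry; xtrace does not capture the redirect targets, but they presumably land in the host's configfs attributes, along these lines (paths assumed, not shown in the trace):

  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha512)' > "$host/dhchap_hash"
  echo ffdhe4096      > "$host/dhchap_dhgroup"
  echo "$key"         > "$host/dhchap_key"        # DHHC-1:... host key for this key ID
  echo "$ckey"        > "$host/dhchap_ctrl_key"   # bidirectional controller key, when one is defined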
00:27:54.239 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVhYmI3NGEyMjQzZWIwYzY5ZWZlNDc2NWY3N2Y4ZDSZDBp+: 00:27:54.239 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzVkN2MwODMyYjlkOWMwOTUyNGRkOTJmMjEwYmM0OTMqm/lv: 00:27:54.239 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:54.239 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:54.239 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmVhYmI3NGEyMjQzZWIwYzY5ZWZlNDc2NWY3N2Y4ZDSZDBp+: 00:27:54.239 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzVkN2MwODMyYjlkOWMwOTUyNGRkOTJmMjEwYmM0OTMqm/lv: ]] 00:27:54.239 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzVkN2MwODMyYjlkOWMwOTUyNGRkOTJmMjEwYmM0OTMqm/lv: 00:27:54.239 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:54.239 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.239 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:54.239 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:54.239 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:54.239 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.240 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:54.240 22:25:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.240 22:25:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.240 22:25:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.240 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.240 22:25:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:54.240 22:25:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:54.240 22:25:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:54.240 22:25:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.240 22:25:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.240 22:25:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:54.240 22:25:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.240 22:25:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:54.240 22:25:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:54.240 22:25:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:54.240 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:54.240 22:25:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.240 22:25:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.500 nvme0n1 00:27:54.500 22:25:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.500 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:27:54.500 22:25:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.500 22:25:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.500 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.500 22:25:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.500 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.500 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.500 22:25:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.500 22:25:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.500 22:25:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.500 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.500 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:54.500 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.500 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:54.500 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:54.500 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:54.500 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q4NGFkY2I3NzZlOGM4ZGE3MTBhYjk2NzAyY2FjYWRiOThkYzJmNmM1ZmMwYTg3ZwiJXg==: 00:27:54.500 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmUzMTI0ZjdiMjAyZDVhNTNhNzdiMGVjZTM3ZWUyNzdv/eoY: 00:27:54.500 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:54.500 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:54.500 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q4NGFkY2I3NzZlOGM4ZGE3MTBhYjk2NzAyY2FjYWRiOThkYzJmNmM1ZmMwYTg3ZwiJXg==: 00:27:54.500 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmUzMTI0ZjdiMjAyZDVhNTNhNzdiMGVjZTM3ZWUyNzdv/eoY: ]] 00:27:54.500 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmUzMTI0ZjdiMjAyZDVhNTNhNzdiMGVjZTM3ZWUyNzdv/eoY: 00:27:54.500 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:54.500 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.500 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:54.500 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:54.501 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:54.501 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.501 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:54.501 22:25:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.501 22:25:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.501 22:25:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.501 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.501 22:25:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:27:54.501 22:25:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:54.501 22:25:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:54.501 22:25:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.501 22:25:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.501 22:25:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:54.501 22:25:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.501 22:25:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:54.501 22:25:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:54.501 22:25:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:54.501 22:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:54.501 22:25:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.501 22:25:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.761 nvme0n1 00:27:54.761 22:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.761 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.761 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.761 22:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.761 22:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.761 22:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.022 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.022 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.022 22:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.022 22:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.022 22:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.022 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.022 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:55.022 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.022 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:55.022 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:55.022 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:55.022 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTQ1OGI2YzVmMWIzYWU0ZmZiZTQ1YmRkMGM3YzJmYzc3ODExNGY4YzU2NmQ4NmU5MDRjNzU3NWFiZmJlNDdlM5rmE/U=: 00:27:55.022 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:55.022 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:55.022 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:55.022 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OTQ1OGI2YzVmMWIzYWU0ZmZiZTQ1YmRkMGM3YzJmYzc3ODExNGY4YzU2NmQ4NmU5MDRjNzU3NWFiZmJlNDdlM5rmE/U=: 00:27:55.022 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:55.022 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:55.022 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.022 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:55.022 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:55.022 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:55.022 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.022 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:55.022 22:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.022 22:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.023 22:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.023 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.023 22:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:55.023 22:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:55.023 22:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:55.023 22:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.023 22:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.023 22:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:55.023 22:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.023 22:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:55.023 22:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:55.023 22:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:55.023 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:55.023 22:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.023 22:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.283 nvme0n1 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFlNWZkMGU2MGE3YzE4M2FjMDM5YzI3YzRkY2E0MDIXAFaV: 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTU4ZTM3YzY0NzY0ZWJlNWRkMmIzZjllNmU0ZDA0N2VhZDczMTNmYzU0MGJhMTQwOWI1OTMzMDA5ZDcwYWI1ZtGGAwc=: 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFlNWZkMGU2MGE3YzE4M2FjMDM5YzI3YzRkY2E0MDIXAFaV: 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTU4ZTM3YzY0NzY0ZWJlNWRkMmIzZjllNmU0ZDA0N2VhZDczMTNmYzU0MGJhMTQwOWI1OTMzMDA5ZDcwYWI1ZtGGAwc=: ]] 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTU4ZTM3YzY0NzY0ZWJlNWRkMmIzZjllNmU0ZDA0N2VhZDczMTNmYzU0MGJhMTQwOWI1OTMzMDA5ZDcwYWI1ZtGGAwc=: 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.283 22:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.854 nvme0n1 00:27:55.854 22:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.854 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.854 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.854 22:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.854 22:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.854 22:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.854 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.854 22:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.854 22:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.854 22:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.854 22:25:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.854 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.854 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:55.854 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.854 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:55.854 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:55.854 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:55.854 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQzM2U4ODk3MTkyMDBlOTVkYTU4ZTIwNjAzNDJiMjNkNzNmNGVlZWYyOTU2ZmUwcUaQtA==: 00:27:55.854 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: 00:27:55.854 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:55.854 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:55.854 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQzM2U4ODk3MTkyMDBlOTVkYTU4ZTIwNjAzNDJiMjNkNzNmNGVlZWYyOTU2ZmUwcUaQtA==: 00:27:55.854 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: ]] 00:27:55.854 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: 00:27:55.854 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
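connect_authenticate, traced below for sha512/ffdhe6144 with key ID 1, appears to reduce on the initiator side to roughly these RPCs (a sketch assuming rpc_cmd wraps scripts/rpc.py; key1/ckey1 refer to key objects set up earlier in the test, outside this part of the log):

  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  scripts/rpc.py bdev_nvme_get_controllers           # expect a single controller, nvme0
  scripts/rpc.py bdev_nvme_detach_controller nvme0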
00:27:55.854 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.854 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:55.854 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:55.854 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:55.854 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.854 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:55.854 22:25:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.854 22:25:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.854 22:25:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.854 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.854 22:25:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:55.854 22:25:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:55.854 22:25:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:55.854 22:25:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.854 22:25:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.854 22:25:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:55.854 22:25:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.854 22:25:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:55.854 22:25:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:55.854 22:25:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:55.854 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:55.854 22:25:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.854 22:25:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.426 nvme0n1 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVhYmI3NGEyMjQzZWIwYzY5ZWZlNDc2NWY3N2Y4ZDSZDBp+: 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzVkN2MwODMyYjlkOWMwOTUyNGRkOTJmMjEwYmM0OTMqm/lv: 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmVhYmI3NGEyMjQzZWIwYzY5ZWZlNDc2NWY3N2Y4ZDSZDBp+: 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzVkN2MwODMyYjlkOWMwOTUyNGRkOTJmMjEwYmM0OTMqm/lv: ]] 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzVkN2MwODMyYjlkOWMwOTUyNGRkOTJmMjEwYmM0OTMqm/lv: 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.427 22:25:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.687 nvme0n1 00:27:56.687 22:25:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.687 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.947 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.947 22:25:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.947 22:25:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.947 22:25:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.947 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.947 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.947 22:25:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.947 22:25:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.947 22:25:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.947 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.947 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:27:56.947 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.947 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:56.947 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:56.947 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:56.947 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q4NGFkY2I3NzZlOGM4ZGE3MTBhYjk2NzAyY2FjYWRiOThkYzJmNmM1ZmMwYTg3ZwiJXg==: 00:27:56.947 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmUzMTI0ZjdiMjAyZDVhNTNhNzdiMGVjZTM3ZWUyNzdv/eoY: 00:27:56.947 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:56.947 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:56.947 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q4NGFkY2I3NzZlOGM4ZGE3MTBhYjk2NzAyY2FjYWRiOThkYzJmNmM1ZmMwYTg3ZwiJXg==: 00:27:56.947 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmUzMTI0ZjdiMjAyZDVhNTNhNzdiMGVjZTM3ZWUyNzdv/eoY: ]] 00:27:56.947 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmUzMTI0ZjdiMjAyZDVhNTNhNzdiMGVjZTM3ZWUyNzdv/eoY: 00:27:56.947 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:56.947 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.947 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:56.947 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:56.947 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:56.947 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.947 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:56.947 22:25:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.947 22:25:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.947 22:25:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.947 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.947 22:25:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:56.947 22:25:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:56.947 22:25:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:56.947 22:25:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.947 22:25:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.947 22:25:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:56.947 22:25:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.947 22:25:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:56.947 22:25:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:56.947 22:25:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:56.947 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:56.947 22:25:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.947 22:25:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.517 nvme0n1 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OTQ1OGI2YzVmMWIzYWU0ZmZiZTQ1YmRkMGM3YzJmYzc3ODExNGY4YzU2NmQ4NmU5MDRjNzU3NWFiZmJlNDdlM5rmE/U=: 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTQ1OGI2YzVmMWIzYWU0ZmZiZTQ1YmRkMGM3YzJmYzc3ODExNGY4YzU2NmQ4NmU5MDRjNzU3NWFiZmJlNDdlM5rmE/U=: 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.517 22:25:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.778 nvme0n1 00:27:57.778 22:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.778 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.778 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.778 22:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.778 22:25:23 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.778 22:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.778 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.778 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.778 22:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.778 22:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.778 22:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.778 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:57.778 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.778 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:57.778 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.778 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:57.778 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:57.778 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:57.778 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFlNWZkMGU2MGE3YzE4M2FjMDM5YzI3YzRkY2E0MDIXAFaV: 00:27:57.778 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTU4ZTM3YzY0NzY0ZWJlNWRkMmIzZjllNmU0ZDA0N2VhZDczMTNmYzU0MGJhMTQwOWI1OTMzMDA5ZDcwYWI1ZtGGAwc=: 00:27:57.778 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:57.778 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:57.778 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFlNWZkMGU2MGE3YzE4M2FjMDM5YzI3YzRkY2E0MDIXAFaV: 00:27:57.778 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTU4ZTM3YzY0NzY0ZWJlNWRkMmIzZjllNmU0ZDA0N2VhZDczMTNmYzU0MGJhMTQwOWI1OTMzMDA5ZDcwYWI1ZtGGAwc=: ]] 00:27:58.038 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTU4ZTM3YzY0NzY0ZWJlNWRkMmIzZjllNmU0ZDA0N2VhZDczMTNmYzU0MGJhMTQwOWI1OTMzMDA5ZDcwYWI1ZtGGAwc=: 00:27:58.038 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:58.038 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.038 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:58.038 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:58.038 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:58.038 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.038 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:58.038 22:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.038 22:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.038 22:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.038 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.038 22:25:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:58.038 22:25:23 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:27:58.038 22:25:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:58.038 22:25:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.038 22:25:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.038 22:25:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:58.038 22:25:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.038 22:25:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:58.038 22:25:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:58.038 22:25:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:58.038 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:58.038 22:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.038 22:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.610 nvme0n1 00:27:58.610 22:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.610 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.610 22:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.610 22:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.610 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.610 22:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.610 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.610 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.610 22:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.610 22:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.883 22:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.883 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.883 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:58.883 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.883 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:58.883 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:58.883 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:58.883 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQzM2U4ODk3MTkyMDBlOTVkYTU4ZTIwNjAzNDJiMjNkNzNmNGVlZWYyOTU2ZmUwcUaQtA==: 00:27:58.883 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: 00:27:58.883 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:58.883 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:58.883 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjQzM2U4ODk3MTkyMDBlOTVkYTU4ZTIwNjAzNDJiMjNkNzNmNGVlZWYyOTU2ZmUwcUaQtA==: 00:27:58.883 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: ]] 00:27:58.883 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: 00:27:58.883 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:58.883 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.883 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:58.883 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:58.883 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:58.883 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.883 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:58.883 22:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.883 22:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.883 22:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.883 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.883 22:25:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:58.883 22:25:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:58.883 22:25:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:58.883 22:25:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.883 22:25:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.883 22:25:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:58.883 22:25:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.883 22:25:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:58.883 22:25:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:58.883 22:25:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:58.883 22:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:58.883 22:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.883 22:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.466 nvme0n1 00:27:59.466 22:25:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.466 22:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.466 22:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.466 22:25:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.466 22:25:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.466 22:25:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.466 22:25:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.466 22:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.466 22:25:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.466 22:25:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.466 22:25:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.466 22:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.466 22:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:59.466 22:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.466 22:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:59.466 22:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:59.466 22:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:59.466 22:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVhYmI3NGEyMjQzZWIwYzY5ZWZlNDc2NWY3N2Y4ZDSZDBp+: 00:27:59.466 22:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzVkN2MwODMyYjlkOWMwOTUyNGRkOTJmMjEwYmM0OTMqm/lv: 00:27:59.466 22:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:59.466 22:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:59.466 22:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmVhYmI3NGEyMjQzZWIwYzY5ZWZlNDc2NWY3N2Y4ZDSZDBp+: 00:27:59.466 22:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzVkN2MwODMyYjlkOWMwOTUyNGRkOTJmMjEwYmM0OTMqm/lv: ]] 00:27:59.466 22:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzVkN2MwODMyYjlkOWMwOTUyNGRkOTJmMjEwYmM0OTMqm/lv: 00:27:59.466 22:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:59.466 22:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.466 22:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:59.466 22:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:59.466 22:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:59.466 22:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.466 22:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:59.466 22:25:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.466 22:25:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.466 22:25:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.466 22:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.466 22:25:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:59.466 22:25:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:59.466 22:25:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:59.466 22:25:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.466 22:25:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.466 22:25:24 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:59.466 22:25:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.466 22:25:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:59.466 22:25:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:59.466 22:25:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:59.726 22:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:59.726 22:25:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.726 22:25:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.295 nvme0n1 00:28:00.295 22:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.295 22:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.295 22:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.295 22:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.295 22:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.295 22:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.295 22:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.295 22:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.295 22:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.295 22:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.295 22:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.295 22:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.295 22:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:00.295 22:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.295 22:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:00.295 22:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:00.295 22:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:00.295 22:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q4NGFkY2I3NzZlOGM4ZGE3MTBhYjk2NzAyY2FjYWRiOThkYzJmNmM1ZmMwYTg3ZwiJXg==: 00:28:00.295 22:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmUzMTI0ZjdiMjAyZDVhNTNhNzdiMGVjZTM3ZWUyNzdv/eoY: 00:28:00.295 22:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:00.295 22:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:00.295 22:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q4NGFkY2I3NzZlOGM4ZGE3MTBhYjk2NzAyY2FjYWRiOThkYzJmNmM1ZmMwYTg3ZwiJXg==: 00:28:00.295 22:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmUzMTI0ZjdiMjAyZDVhNTNhNzdiMGVjZTM3ZWUyNzdv/eoY: ]] 00:28:00.295 22:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmUzMTI0ZjdiMjAyZDVhNTNhNzdiMGVjZTM3ZWUyNzdv/eoY: 00:28:00.295 22:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:00.295 22:25:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.295 22:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:00.295 22:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:00.295 22:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:00.295 22:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.295 22:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:00.295 22:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.295 22:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.295 22:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.295 22:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.295 22:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:00.295 22:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:00.295 22:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:00.295 22:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.295 22:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.295 22:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:00.295 22:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.295 22:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:00.295 22:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:00.295 22:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:00.295 22:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:00.554 22:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.554 22:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.170 nvme0n1 00:28:01.170 22:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.170 22:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.170 22:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.170 22:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.170 22:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.170 22:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.170 22:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.170 22:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.170 22:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.170 22:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.170 22:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.170 22:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:01.170 22:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:01.170 22:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.170 22:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:01.170 22:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:01.170 22:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:01.170 22:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTQ1OGI2YzVmMWIzYWU0ZmZiZTQ1YmRkMGM3YzJmYzc3ODExNGY4YzU2NmQ4NmU5MDRjNzU3NWFiZmJlNDdlM5rmE/U=: 00:28:01.170 22:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:01.170 22:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:01.170 22:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:01.170 22:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTQ1OGI2YzVmMWIzYWU0ZmZiZTQ1YmRkMGM3YzJmYzc3ODExNGY4YzU2NmQ4NmU5MDRjNzU3NWFiZmJlNDdlM5rmE/U=: 00:28:01.170 22:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:01.170 22:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:01.170 22:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.170 22:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:01.170 22:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:01.170 22:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:01.170 22:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.170 22:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:01.170 22:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.170 22:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.170 22:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.170 22:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.170 22:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:01.171 22:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:01.171 22:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:01.171 22:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.171 22:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.171 22:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:01.171 22:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.171 22:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:01.171 22:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:01.171 22:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:01.171 22:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:01.171 22:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:28:01.171 22:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.109 nvme0n1 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQzM2U4ODk3MTkyMDBlOTVkYTU4ZTIwNjAzNDJiMjNkNzNmNGVlZWYyOTU2ZmUwcUaQtA==: 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQzM2U4ODk3MTkyMDBlOTVkYTU4ZTIwNjAzNDJiMjNkNzNmNGVlZWYyOTU2ZmUwcUaQtA==: 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: ]] 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWIxMWQ4NjAzZDg0NDc0NjMwZjhlODE5NDI2MjA1ODIwMTJjMDI2MmYwZGNlN2RmAr8Qqw==: 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.109 
22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.109 request: 00:28:02.109 { 00:28:02.109 "name": "nvme0", 00:28:02.109 "trtype": "tcp", 00:28:02.109 "traddr": "10.0.0.1", 00:28:02.109 "adrfam": "ipv4", 00:28:02.109 "trsvcid": "4420", 00:28:02.109 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:02.109 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:02.109 "prchk_reftag": false, 00:28:02.109 "prchk_guard": false, 00:28:02.109 "hdgst": false, 00:28:02.109 "ddgst": false, 00:28:02.109 "method": "bdev_nvme_attach_controller", 00:28:02.109 "req_id": 1 00:28:02.109 } 00:28:02.109 Got JSON-RPC error response 00:28:02.109 response: 00:28:02.109 { 00:28:02.109 "code": -5, 00:28:02.109 "message": "Input/output error" 00:28:02.109 } 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.109 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.109 request: 00:28:02.109 { 00:28:02.109 "name": "nvme0", 00:28:02.109 "trtype": "tcp", 00:28:02.109 "traddr": "10.0.0.1", 00:28:02.109 "adrfam": "ipv4", 00:28:02.109 "trsvcid": "4420", 00:28:02.109 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:02.109 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:02.109 "prchk_reftag": false, 00:28:02.109 "prchk_guard": false, 00:28:02.109 "hdgst": false, 00:28:02.109 "ddgst": false, 00:28:02.109 "dhchap_key": "key2", 00:28:02.109 "method": "bdev_nvme_attach_controller", 00:28:02.109 "req_id": 1 00:28:02.109 } 00:28:02.109 Got JSON-RPC error response 00:28:02.109 response: 00:28:02.109 { 00:28:02.109 "code": -5, 00:28:02.109 "message": "Input/output error" 00:28:02.109 } 00:28:02.110 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:02.110 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:28:02.110 22:25:27 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:02.110 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:02.110 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:02.110 22:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.110 22:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:02.110 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.110 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.110 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.370 request: 00:28:02.370 { 00:28:02.370 "name": "nvme0", 00:28:02.370 "trtype": "tcp", 00:28:02.370 "traddr": "10.0.0.1", 00:28:02.370 "adrfam": "ipv4", 
00:28:02.370 "trsvcid": "4420", 00:28:02.370 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:02.370 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:02.370 "prchk_reftag": false, 00:28:02.370 "prchk_guard": false, 00:28:02.370 "hdgst": false, 00:28:02.370 "ddgst": false, 00:28:02.370 "dhchap_key": "key1", 00:28:02.370 "dhchap_ctrlr_key": "ckey2", 00:28:02.370 "method": "bdev_nvme_attach_controller", 00:28:02.370 "req_id": 1 00:28:02.370 } 00:28:02.370 Got JSON-RPC error response 00:28:02.370 response: 00:28:02.370 { 00:28:02.370 "code": -5, 00:28:02.370 "message": "Input/output error" 00:28:02.370 } 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:02.370 rmmod nvme_tcp 00:28:02.370 rmmod nvme_fabrics 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2927729 ']' 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2927729 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 2927729 ']' 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 2927729 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2927729 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2927729' 00:28:02.370 killing process with pid 2927729 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 2927729 00:28:02.370 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 2927729 00:28:02.630 22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:28:02.630 22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:02.630 22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:02.630 22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:02.630 22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:02.630 22:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:02.630 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:02.630 22:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:04.545 22:25:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:04.545 22:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:04.545 22:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:04.545 22:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:04.545 22:25:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:04.545 22:25:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:28:04.545 22:25:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:04.545 22:25:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:04.545 22:25:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:04.545 22:25:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:04.545 22:25:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:04.545 22:25:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:04.806 22:25:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:08.110 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:08.110 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:08.110 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:08.110 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:08.110 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:08.110 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:08.110 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:08.110 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:08.110 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:08.110 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:08.110 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:08.110 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:08.110 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:08.110 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:08.110 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:08.110 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:08.370 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:08.636 22:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.AUx /tmp/spdk.key-null.h7Q /tmp/spdk.key-sha256.305 /tmp/spdk.key-sha384.45P /tmp/spdk.key-sha512.mIT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:08.636 22:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:11.938 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:11.938 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:11.938 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:11.938 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:11.938 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:11.938 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:11.938 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:11.938 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:11.938 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:11.938 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:28:11.938 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:11.938 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:11.938 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:11.938 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:11.938 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:11.938 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:11.938 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:11.938 00:28:11.938 real 0m57.715s 00:28:11.938 user 0m51.118s 00:28:11.938 sys 0m14.593s 00:28:11.938 22:25:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:11.938 22:25:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.938 ************************************ 00:28:11.938 END TEST nvmf_auth_host 00:28:11.938 ************************************ 00:28:11.938 22:25:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:11.938 22:25:37 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:28:11.938 22:25:37 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:11.938 22:25:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:11.938 22:25:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:11.938 22:25:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:11.938 ************************************ 00:28:11.938 START TEST nvmf_digest 00:28:11.938 ************************************ 00:28:11.938 22:25:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:12.200 * Looking for test storage... 
00:28:12.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:12.200 22:25:37 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:12.200 22:25:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:12.200 22:25:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:12.200 22:25:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:12.200 22:25:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:12.200 22:25:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:12.200 22:25:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:12.200 22:25:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:12.200 22:25:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:12.200 22:25:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:12.200 22:25:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:12.200 22:25:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:12.200 22:25:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:12.200 22:25:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:12.200 22:25:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:12.200 22:25:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:12.200 22:25:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:12.200 22:25:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:12.200 22:25:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:12.200 22:25:37 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:12.200 22:25:37 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:12.200 22:25:37 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:12.200 22:25:37 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.200 22:25:37 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.200 22:25:37 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.200 22:25:37 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:12.201 22:25:37 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.201 22:25:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:28:12.201 22:25:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:12.201 22:25:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:12.201 22:25:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:12.201 22:25:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:12.201 22:25:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:12.201 22:25:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:12.201 22:25:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:12.201 22:25:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:12.201 22:25:37 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:12.201 22:25:37 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:12.201 22:25:37 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:12.201 22:25:37 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:12.201 22:25:37 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:12.201 22:25:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:12.201 22:25:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:12.201 22:25:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:12.201 22:25:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:12.201 22:25:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:12.201 22:25:37 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.201 22:25:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:12.201 22:25:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.201 22:25:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:12.201 22:25:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:12.201 22:25:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:28:12.201 22:25:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:18.789 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:18.789 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:18.789 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:18.789 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:18.789 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:19.050 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:19.050 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:19.050 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:19.050 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:19.050 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:19.050 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:19.050 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:19.050 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:19.050 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.424 ms 00:28:19.050 00:28:19.050 --- 10.0.0.2 ping statistics --- 00:28:19.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:19.050 rtt min/avg/max/mdev = 0.424/0.424/0.424/0.000 ms 00:28:19.050 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:19.311 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:19.311 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.356 ms 00:28:19.311 00:28:19.311 --- 10.0.0.1 ping statistics --- 00:28:19.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:19.311 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:28:19.311 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:19.311 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:28:19.311 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:19.311 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:19.311 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:19.311 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:19.311 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:19.311 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:19.311 22:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:19.311 22:25:44 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:19.311 22:25:44 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:19.311 22:25:44 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:19.311 22:25:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:19.311 22:25:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:19.311 22:25:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:19.311 ************************************ 00:28:19.311 START TEST nvmf_digest_clean 00:28:19.311 ************************************ 00:28:19.311 22:25:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:28:19.311 22:25:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:28:19.311 22:25:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:19.311 22:25:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:19.311 22:25:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:19.311 22:25:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:19.311 22:25:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:19.311 22:25:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:19.311 22:25:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:19.311 22:25:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=2944172 00:28:19.311 22:25:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 2944172 00:28:19.311 22:25:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:19.311 22:25:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2944172 ']' 00:28:19.311 22:25:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:19.311 
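The nvmf_tcp_init block above splits the two ice ports into a target/initiator pair: cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and given 10.0.0.2 as the target address, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens TCP port 4420 on the initiator interface, and a ping in each direction confirms the path before the target is started. A minimal standalone sketch of that setup, using only the interface names and addresses taken from the trace (run as root):

TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                      # target NIC lives inside the namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"                  # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                 # target -> initiator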
22:25:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:19.311 22:25:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:19.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:19.311 22:25:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:19.311 22:25:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:19.311 [2024-07-15 22:25:44.505016] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:28:19.311 [2024-07-15 22:25:44.505064] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:19.311 EAL: No free 2048 kB hugepages reported on node 1 00:28:19.311 [2024-07-15 22:25:44.573172] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.572 [2024-07-15 22:25:44.636262] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:19.572 [2024-07-15 22:25:44.636296] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:19.572 [2024-07-15 22:25:44.636303] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:19.572 [2024-07-15 22:25:44.636310] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:19.572 [2024-07-15 22:25:44.636316] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
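nvmfappstart launches the target inside that namespace with --wait-for-rpc (so the framework stays uninitialized until told otherwise), and waitforlisten polls /var/tmp/spdk.sock until the RPC server answers; common_target_config, a little further down, then creates the null0 bdev, the TCP transport and a listener on 10.0.0.2:4420 for nqn.2016-06.io.spdk:cnode1. The exact RPC arguments are not expanded in the trace, so the bdev size and the transport/subsystem parameters in the sketch below are assumptions; only the binary path, names, address and port come from the log:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NS=cvl_0_0_ns_spdk

ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!
# waitforlisten: poll the default RPC socket until the target responds
until "$SPDK/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

"$SPDK/scripts/rpc.py" framework_start_init
"$SPDK/scripts/rpc.py" bdev_null_create null0 100 4096                      # size/block size assumed
"$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o
"$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a  # -a (allow any host) assumed
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420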
00:28:19.572 [2024-07-15 22:25:44.636336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:19.572 22:25:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:19.572 22:25:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:19.572 22:25:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:19.572 22:25:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:19.572 22:25:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:19.572 22:25:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:19.572 22:25:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:19.572 22:25:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:19.572 22:25:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:19.572 22:25:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.572 22:25:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:19.572 null0 00:28:19.572 [2024-07-15 22:25:44.785682] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:19.572 [2024-07-15 22:25:44.809868] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:19.572 22:25:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.573 22:25:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:19.573 22:25:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:19.573 22:25:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:19.573 22:25:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:19.573 22:25:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:19.573 22:25:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:19.573 22:25:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:19.573 22:25:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2944240 00:28:19.573 22:25:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2944240 /var/tmp/bperf.sock 00:28:19.573 22:25:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2944240 ']' 00:28:19.573 22:25:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:19.573 22:25:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:19.573 22:25:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:19.573 22:25:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:28:19.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:19.573 22:25:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:19.573 22:25:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:19.573 [2024-07-15 22:25:44.865643] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:28:19.573 [2024-07-15 22:25:44.865688] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2944240 ] 00:28:19.573 EAL: No free 2048 kB hugepages reported on node 1 00:28:19.834 [2024-07-15 22:25:44.940940] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.834 [2024-07-15 22:25:45.005019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:20.405 22:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:20.405 22:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:20.405 22:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:20.405 22:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:20.405 22:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:20.666 22:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:20.666 22:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:20.926 nvme0n1 00:28:20.926 22:25:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:20.926 22:25:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:20.926 Running I/O for 2 seconds... 
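Each run_bperf pass uses the same client-side recipe, fully expanded in the lines above: start bdevperf idle on its own RPC socket (-z --wait-for-rpc), wait for that socket, finish framework init, attach an NVMe-oF controller over TCP with data digest enabled (--ddgst), then drive the 2-second workload through bdevperf.py. The bperf_rpc and bperf_py helpers are simply rpc.py and bdevperf.py pinned to /var/tmp/bperf.sock instead of the target's default socket. Condensed, the pass just launched (randread, 4 KiB, queue depth 128) amounts to:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock

"$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
bperfpid=$!
until "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" framework_start_init
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests    # the "Running I/O for 2 seconds" step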
00:28:22.840 00:28:22.840 Latency(us) 00:28:22.840 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:22.840 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:22.840 nvme0n1 : 2.00 20812.62 81.30 0.00 0.00 6142.42 2894.51 13216.43 00:28:22.840 =================================================================================================================== 00:28:22.840 Total : 20812.62 81.30 0.00 0.00 6142.42 2894.51 13216.43 00:28:22.840 0 00:28:23.101 22:25:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:23.101 22:25:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:23.101 22:25:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:23.101 22:25:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:23.101 22:25:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:23.101 | select(.opcode=="crc32c") 00:28:23.101 | "\(.module_name) \(.executed)"' 00:28:23.101 22:25:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:23.101 22:25:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:23.101 22:25:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:23.101 22:25:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:23.101 22:25:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2944240 00:28:23.101 22:25:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2944240 ']' 00:28:23.101 22:25:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2944240 00:28:23.101 22:25:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:23.101 22:25:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:23.101 22:25:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2944240 00:28:23.101 22:25:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:23.101 22:25:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:23.101 22:25:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2944240' 00:28:23.101 killing process with pid 2944240 00:28:23.101 22:25:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2944240 00:28:23.101 Received shutdown signal, test time was about 2.000000 seconds 00:28:23.101 00:28:23.101 Latency(us) 00:28:23.101 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:23.101 =================================================================================================================== 00:28:23.101 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:23.101 22:25:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2944240 00:28:23.362 22:25:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:23.362 22:25:48 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:23.362 22:25:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:23.362 22:25:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:23.362 22:25:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:23.362 22:25:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:23.362 22:25:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:23.362 22:25:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2945006 00:28:23.362 22:25:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2945006 /var/tmp/bperf.sock 00:28:23.362 22:25:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2945006 ']' 00:28:23.362 22:25:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:23.362 22:25:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:23.362 22:25:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:23.362 22:25:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:23.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:23.362 22:25:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:23.362 22:25:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:23.362 [2024-07-15 22:25:48.577484] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:28:23.362 [2024-07-15 22:25:48.577541] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2945006 ] 00:28:23.362 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:23.362 Zero copy mechanism will not be used. 
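After every pass the test reads the accel framework statistics back from the bdevperf process and checks that the crc32c opcode (the digest calculation behind --ddgst) was actually executed, and by the expected module: software here, because DSA offload is disabled (scan_dsa=false). The jq pipeline is expanded after the first pass above and repeated after each of the following ones; pulled out on its own, and reusing the same socket, it looks like this:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock

read -r acc_module acc_executed < <("$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
(( acc_executed > 0 ))                # the digest path really ran
[[ $acc_module == software ]]         # ...and used the expected crc32c module (dsa in the DSA-enabled variants)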
00:28:23.362 EAL: No free 2048 kB hugepages reported on node 1 00:28:23.362 [2024-07-15 22:25:48.651979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.623 [2024-07-15 22:25:48.715748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:24.193 22:25:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:24.194 22:25:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:24.194 22:25:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:24.194 22:25:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:24.194 22:25:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:24.455 22:25:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:24.455 22:25:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:24.715 nvme0n1 00:28:24.715 22:25:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:24.715 22:25:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:24.715 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:24.715 Zero copy mechanism will not be used. 00:28:24.715 Running I/O for 2 seconds... 
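The MiB/s column in these summary tables is simply IOPS times the I/O size: the 4 KiB randread pass above reports 20812.62 IOPS, which works out to about 81.30 MiB/s, and the same relation holds for the 128 KiB table that follows (2095.73 IOPS at 128 KiB is about 261.97 MiB/s). A one-liner to double-check the first table:

awk 'BEGIN { printf "%.2f MiB/s\n", 20812.62 * 4096 / 1048576 }'   # -> 81.30, matching the table above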
00:28:26.682 00:28:26.682 Latency(us) 00:28:26.682 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:26.682 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:26.682 nvme0n1 : 2.01 2095.73 261.97 0.00 0.00 7632.06 2211.84 14745.60 00:28:26.682 =================================================================================================================== 00:28:26.682 Total : 2095.73 261.97 0.00 0.00 7632.06 2211.84 14745.60 00:28:26.682 0 00:28:26.682 22:25:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:26.682 22:25:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:26.682 22:25:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:26.682 22:25:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:26.682 | select(.opcode=="crc32c") 00:28:26.682 | "\(.module_name) \(.executed)"' 00:28:26.682 22:25:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:26.944 22:25:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:26.944 22:25:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:26.944 22:25:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:26.944 22:25:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:26.944 22:25:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2945006 00:28:26.944 22:25:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2945006 ']' 00:28:26.944 22:25:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2945006 00:28:26.944 22:25:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:26.944 22:25:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:26.944 22:25:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2945006 00:28:26.944 22:25:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:26.944 22:25:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:26.944 22:25:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2945006' 00:28:26.944 killing process with pid 2945006 00:28:26.944 22:25:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2945006 00:28:26.944 Received shutdown signal, test time was about 2.000000 seconds 00:28:26.944 00:28:26.944 Latency(us) 00:28:26.944 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:26.944 =================================================================================================================== 00:28:26.944 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:26.944 22:25:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2945006 00:28:27.205 22:25:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:27.205 22:25:52 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:27.205 22:25:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:27.205 22:25:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:27.205 22:25:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:27.205 22:25:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:27.205 22:25:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:27.205 22:25:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2945782 00:28:27.205 22:25:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2945782 /var/tmp/bperf.sock 00:28:27.205 22:25:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2945782 ']' 00:28:27.205 22:25:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:27.205 22:25:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:27.205 22:25:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:27.205 22:25:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:27.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:27.205 22:25:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:27.205 22:25:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:27.205 [2024-07-15 22:25:52.332707] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
00:28:27.205 [2024-07-15 22:25:52.332809] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2945782 ] 00:28:27.205 EAL: No free 2048 kB hugepages reported on node 1 00:28:27.205 [2024-07-15 22:25:52.410165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.205 [2024-07-15 22:25:52.463663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:27.776 22:25:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:27.776 22:25:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:27.776 22:25:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:27.776 22:25:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:27.776 22:25:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:28.037 22:25:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:28.037 22:25:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:28.608 nvme0n1 00:28:28.608 22:25:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:28.608 22:25:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:28.608 Running I/O for 2 seconds... 
00:28:30.522 00:28:30.522 Latency(us) 00:28:30.522 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.522 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:30.522 nvme0n1 : 2.01 21840.23 85.31 0.00 0.00 5852.76 4068.69 12834.13 00:28:30.522 =================================================================================================================== 00:28:30.522 Total : 21840.23 85.31 0.00 0.00 5852.76 4068.69 12834.13 00:28:30.522 0 00:28:30.522 22:25:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:30.522 22:25:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:30.522 22:25:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:30.522 22:25:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:30.522 | select(.opcode=="crc32c") 00:28:30.523 | "\(.module_name) \(.executed)"' 00:28:30.523 22:25:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:30.783 22:25:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:30.783 22:25:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:30.784 22:25:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:30.784 22:25:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:30.784 22:25:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2945782 00:28:30.784 22:25:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2945782 ']' 00:28:30.784 22:25:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2945782 00:28:30.784 22:25:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:30.784 22:25:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:30.784 22:25:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2945782 00:28:30.784 22:25:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:30.784 22:25:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:30.784 22:25:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2945782' 00:28:30.784 killing process with pid 2945782 00:28:30.784 22:25:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2945782 00:28:30.784 Received shutdown signal, test time was about 2.000000 seconds 00:28:30.784 00:28:30.784 Latency(us) 00:28:30.784 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.784 =================================================================================================================== 00:28:30.784 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:30.784 22:25:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2945782 00:28:31.044 22:25:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:31.044 22:25:56 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:31.044 22:25:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:31.044 22:25:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:31.044 22:25:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:31.044 22:25:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:31.044 22:25:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:31.044 22:25:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2946559 00:28:31.044 22:25:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2946559 /var/tmp/bperf.sock 00:28:31.044 22:25:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2946559 ']' 00:28:31.044 22:25:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:31.044 22:25:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:31.044 22:25:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:31.044 22:25:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:31.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:31.044 22:25:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:31.044 22:25:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:31.044 [2024-07-15 22:25:56.186383] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:28:31.044 [2024-07-15 22:25:56.186444] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2946559 ] 00:28:31.044 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:31.044 Zero copy mechanism will not be used. 
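Between passes each bdevperf instance is shut down with the killprocess helper, whose expansion surrounds every "killing process with pid ..." line above: it checks that the pid is still alive, refuses to signal a bare sudo wrapper, sends SIGTERM and then waits for the process so the "Received shutdown signal" summary gets flushed before the next pass starts. Reconstructed as a sketch from those expanded lines (not the verbatim autotest_common.sh source):

killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1
    kill -0 "$pid" || return 1                         # still running?
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    [ "$process_name" = sudo ] && return 1             # never terminate the sudo wrapper itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true
}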
00:28:31.044 EAL: No free 2048 kB hugepages reported on node 1 00:28:31.044 [2024-07-15 22:25:56.262309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:31.044 [2024-07-15 22:25:56.315027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:31.989 22:25:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:31.989 22:25:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:31.989 22:25:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:31.989 22:25:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:31.989 22:25:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:31.989 22:25:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:31.989 22:25:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:32.249 nvme0n1 00:28:32.249 22:25:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:32.249 22:25:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:32.510 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:32.510 Zero copy mechanism will not be used. 00:28:32.510 Running I/O for 2 seconds... 
00:28:34.425 00:28:34.425 Latency(us) 00:28:34.425 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:34.425 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:34.426 nvme0n1 : 2.01 2942.43 367.80 0.00 0.00 5428.18 3959.47 20316.16 00:28:34.426 =================================================================================================================== 00:28:34.426 Total : 2942.43 367.80 0.00 0.00 5428.18 3959.47 20316.16 00:28:34.426 0 00:28:34.426 22:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:34.426 22:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:34.426 22:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:34.426 22:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:34.426 | select(.opcode=="crc32c") 00:28:34.426 | "\(.module_name) \(.executed)"' 00:28:34.426 22:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:34.687 22:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:34.687 22:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:34.687 22:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:34.687 22:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:34.687 22:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2946559 00:28:34.687 22:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2946559 ']' 00:28:34.687 22:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2946559 00:28:34.687 22:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:34.687 22:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:34.687 22:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2946559 00:28:34.687 22:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:34.687 22:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:34.687 22:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2946559' 00:28:34.687 killing process with pid 2946559 00:28:34.687 22:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2946559 00:28:34.687 Received shutdown signal, test time was about 2.000000 seconds 00:28:34.687 00:28:34.687 Latency(us) 00:28:34.687 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:34.687 =================================================================================================================== 00:28:34.687 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:34.687 22:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2946559 00:28:34.687 22:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2944172 00:28:34.687 22:25:59 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2944172 ']' 00:28:34.687 22:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2944172 00:28:34.687 22:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:34.687 22:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:34.687 22:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2944172 00:28:34.948 22:26:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:34.948 22:26:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:34.948 22:26:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2944172' 00:28:34.948 killing process with pid 2944172 00:28:34.948 22:26:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2944172 00:28:34.948 22:26:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2944172 00:28:34.948 00:28:34.948 real 0m15.711s 00:28:34.948 user 0m31.538s 00:28:34.948 sys 0m3.109s 00:28:34.948 22:26:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:34.948 22:26:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:34.948 ************************************ 00:28:34.948 END TEST nvmf_digest_clean 00:28:34.948 ************************************ 00:28:34.948 22:26:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:28:34.948 22:26:00 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:34.948 22:26:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:34.948 22:26:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:34.948 22:26:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:34.948 ************************************ 00:28:34.948 START TEST nvmf_digest_error 00:28:34.948 ************************************ 00:28:34.948 22:26:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:28:34.948 22:26:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:34.948 22:26:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:34.948 22:26:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:34.948 22:26:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:34.948 22:26:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2947274 00:28:34.948 22:26:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 2947274 00:28:34.948 22:26:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:34.948 22:26:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2947274 ']' 00:28:34.948 22:26:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:28:34.948 22:26:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:34.948 22:26:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:34.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:34.948 22:26:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:34.948 22:26:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:35.209 [2024-07-15 22:26:00.286073] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:28:35.209 [2024-07-15 22:26:00.286127] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:35.209 EAL: No free 2048 kB hugepages reported on node 1 00:28:35.209 [2024-07-15 22:26:00.351183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.209 [2024-07-15 22:26:00.417198] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:35.209 [2024-07-15 22:26:00.417232] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:35.209 [2024-07-15 22:26:00.417243] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:35.209 [2024-07-15 22:26:00.417249] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:35.209 [2024-07-15 22:26:00.417255] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
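The error-path test that follows reuses the same target and bperf plumbing, but routes the target's crc32c opcode through the accel "error" module and, once the controller is attached, injects corruption into 256 digest computations; on the bdevperf side, --nvme-error-stat and an unlimited --bdev-retry-count make the resulting data-digest failures show up as retried transient transport errors instead of aborting the run, which is exactly what the nvme_tcp/nvme_qpair lines further below report. Condensed from the RPCs expanded in the trace below (target RPCs on the default /var/tmp/spdk.sock, bdevperf RPCs on /var/tmp/bperf.sock):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock

# target: assign the crc32c opcode to the error-injection accel module
"$SPDK/scripts/rpc.py" accel_assign_opc -o crc32c -m error

# bdevperf: count NVMe errors and retry failed I/O indefinitely
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable         # start with injection off
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256   # corrupt the next 256 digests
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests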
00:28:35.209 [2024-07-15 22:26:00.417272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:35.780 22:26:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:35.780 22:26:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:35.780 22:26:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:35.780 22:26:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:35.780 22:26:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:35.780 22:26:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:35.780 22:26:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:35.780 22:26:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.780 22:26:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:35.780 [2024-07-15 22:26:01.083178] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:35.780 22:26:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.780 22:26:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:35.780 22:26:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:35.780 22:26:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.780 22:26:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:36.041 null0 00:28:36.041 [2024-07-15 22:26:01.163917] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:36.041 [2024-07-15 22:26:01.188110] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:36.041 22:26:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.041 22:26:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:36.041 22:26:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:36.041 22:26:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:36.041 22:26:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:36.041 22:26:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:36.041 22:26:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2947557 00:28:36.041 22:26:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2947557 /var/tmp/bperf.sock 00:28:36.041 22:26:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2947557 ']' 00:28:36.041 22:26:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:28:36.041 22:26:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:36.041 22:26:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:28:36.041 22:26:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:36.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:36.041 22:26:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:36.041 22:26:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:36.041 [2024-07-15 22:26:01.240286] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:28:36.041 [2024-07-15 22:26:01.240337] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2947557 ] 00:28:36.041 EAL: No free 2048 kB hugepages reported on node 1 00:28:36.041 [2024-07-15 22:26:01.312376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.302 [2024-07-15 22:26:01.366265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:36.873 22:26:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:36.873 22:26:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:36.873 22:26:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:36.873 22:26:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:36.873 22:26:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:36.873 22:26:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.873 22:26:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:36.873 22:26:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.873 22:26:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:36.873 22:26:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:37.133 nvme0n1 00:28:37.395 22:26:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:37.395 22:26:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.395 22:26:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:37.395 22:26:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.395 22:26:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:37.395 22:26:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:37.395 Running I/O for 2 seconds... 00:28:37.395 [2024-07-15 22:26:02.587337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.395 [2024-07-15 22:26:02.587365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.395 [2024-07-15 22:26:02.587374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.395 [2024-07-15 22:26:02.600515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.395 [2024-07-15 22:26:02.600535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.395 [2024-07-15 22:26:02.600542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.395 [2024-07-15 22:26:02.612509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.395 [2024-07-15 22:26:02.612527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.395 [2024-07-15 22:26:02.612534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.395 [2024-07-15 22:26:02.624626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.395 [2024-07-15 22:26:02.624644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.395 [2024-07-15 22:26:02.624651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.395 [2024-07-15 22:26:02.636863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.395 [2024-07-15 22:26:02.636880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.395 [2024-07-15 22:26:02.636887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.395 [2024-07-15 22:26:02.649048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.395 [2024-07-15 22:26:02.649066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.395 [2024-07-15 22:26:02.649072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.395 [2024-07-15 22:26:02.661194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.395 [2024-07-15 22:26:02.661212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23519 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:37.395 [2024-07-15 22:26:02.661218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.395 [2024-07-15 22:26:02.673936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.395 [2024-07-15 22:26:02.673953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.395 [2024-07-15 22:26:02.673960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.396 [2024-07-15 22:26:02.686007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.396 [2024-07-15 22:26:02.686024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.396 [2024-07-15 22:26:02.686030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.396 [2024-07-15 22:26:02.697944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.396 [2024-07-15 22:26:02.697961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.396 [2024-07-15 22:26:02.697968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.396 [2024-07-15 22:26:02.710864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.396 [2024-07-15 22:26:02.710882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.396 [2024-07-15 22:26:02.710888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.657 [2024-07-15 22:26:02.723010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.657 [2024-07-15 22:26:02.723027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.657 [2024-07-15 22:26:02.723037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.657 [2024-07-15 22:26:02.734942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.657 [2024-07-15 22:26:02.734959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.657 [2024-07-15 22:26:02.734965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.657 [2024-07-15 22:26:02.746279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.657 [2024-07-15 22:26:02.746296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 
nsid:1 lba:24099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.657 [2024-07-15 22:26:02.746303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.657 [2024-07-15 22:26:02.759822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.658 [2024-07-15 22:26:02.759840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.658 [2024-07-15 22:26:02.759846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.658 [2024-07-15 22:26:02.772655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.658 [2024-07-15 22:26:02.772672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.658 [2024-07-15 22:26:02.772678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.658 [2024-07-15 22:26:02.784529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.658 [2024-07-15 22:26:02.784546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.658 [2024-07-15 22:26:02.784552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.658 [2024-07-15 22:26:02.796748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.658 [2024-07-15 22:26:02.796766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.658 [2024-07-15 22:26:02.796772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.658 [2024-07-15 22:26:02.808180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.658 [2024-07-15 22:26:02.808197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.658 [2024-07-15 22:26:02.808203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.658 [2024-07-15 22:26:02.820727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.658 [2024-07-15 22:26:02.820745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.658 [2024-07-15 22:26:02.820751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.658 [2024-07-15 22:26:02.834078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.658 [2024-07-15 22:26:02.834098] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:12407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.658 [2024-07-15 22:26:02.834104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.658 [2024-07-15 22:26:02.846269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.658 [2024-07-15 22:26:02.846285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.658 [2024-07-15 22:26:02.846291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.658 [2024-07-15 22:26:02.858448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.658 [2024-07-15 22:26:02.858464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.658 [2024-07-15 22:26:02.858470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.658 [2024-07-15 22:26:02.870818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.658 [2024-07-15 22:26:02.870834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.658 [2024-07-15 22:26:02.870841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.658 [2024-07-15 22:26:02.882367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.658 [2024-07-15 22:26:02.882384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.658 [2024-07-15 22:26:02.882391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.658 [2024-07-15 22:26:02.894864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.658 [2024-07-15 22:26:02.894880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:25083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.658 [2024-07-15 22:26:02.894887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.658 [2024-07-15 22:26:02.907597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.658 [2024-07-15 22:26:02.907614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.658 [2024-07-15 22:26:02.907620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.658 [2024-07-15 22:26:02.919584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.658 
[2024-07-15 22:26:02.919600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.658 [2024-07-15 22:26:02.919606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.658 [2024-07-15 22:26:02.932993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.658 [2024-07-15 22:26:02.933010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.658 [2024-07-15 22:26:02.933016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.658 [2024-07-15 22:26:02.944806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.658 [2024-07-15 22:26:02.944823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.658 [2024-07-15 22:26:02.944829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.658 [2024-07-15 22:26:02.957178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.658 [2024-07-15 22:26:02.957195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.658 [2024-07-15 22:26:02.957201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.658 [2024-07-15 22:26:02.970007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.658 [2024-07-15 22:26:02.970024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.658 [2024-07-15 22:26:02.970031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.919 [2024-07-15 22:26:02.981856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.919 [2024-07-15 22:26:02.981873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.919 [2024-07-15 22:26:02.981879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.919 [2024-07-15 22:26:02.994142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.919 [2024-07-15 22:26:02.994159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.919 [2024-07-15 22:26:02.994165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.919 [2024-07-15 22:26:03.006146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x5d78e0) 00:28:37.919 [2024-07-15 22:26:03.006162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:7795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.919 [2024-07-15 22:26:03.006169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.919 [2024-07-15 22:26:03.018927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.919 [2024-07-15 22:26:03.018944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.919 [2024-07-15 22:26:03.018951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.919 [2024-07-15 22:26:03.031055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.919 [2024-07-15 22:26:03.031072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.919 [2024-07-15 22:26:03.031078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.919 [2024-07-15 22:26:03.043140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.919 [2024-07-15 22:26:03.043157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:11199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.919 [2024-07-15 22:26:03.043167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.919 [2024-07-15 22:26:03.054987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.919 [2024-07-15 22:26:03.055004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.919 [2024-07-15 22:26:03.055010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.919 [2024-07-15 22:26:03.067929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.919 [2024-07-15 22:26:03.067946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.919 [2024-07-15 22:26:03.067953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.919 [2024-07-15 22:26:03.080487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.919 [2024-07-15 22:26:03.080504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.919 [2024-07-15 22:26:03.080510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.919 [2024-07-15 22:26:03.093120] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.919 [2024-07-15 22:26:03.093139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.919 [2024-07-15 22:26:03.093146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.919 [2024-07-15 22:26:03.105473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.919 [2024-07-15 22:26:03.105490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:25161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.919 [2024-07-15 22:26:03.105496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.919 [2024-07-15 22:26:03.117137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.920 [2024-07-15 22:26:03.117153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.920 [2024-07-15 22:26:03.117160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.920 [2024-07-15 22:26:03.129336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.920 [2024-07-15 22:26:03.129353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.920 [2024-07-15 22:26:03.129359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.920 [2024-07-15 22:26:03.141585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.920 [2024-07-15 22:26:03.141602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.920 [2024-07-15 22:26:03.141608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.920 [2024-07-15 22:26:03.153927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.920 [2024-07-15 22:26:03.153944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.920 [2024-07-15 22:26:03.153951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.920 [2024-07-15 22:26:03.166132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.920 [2024-07-15 22:26:03.166149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:13604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.920 [2024-07-15 22:26:03.166155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:28:37.920 [2024-07-15 22:26:03.179387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.920 [2024-07-15 22:26:03.179403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.920 [2024-07-15 22:26:03.179409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.920 [2024-07-15 22:26:03.191302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.920 [2024-07-15 22:26:03.191318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.920 [2024-07-15 22:26:03.191324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.920 [2024-07-15 22:26:03.202949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.920 [2024-07-15 22:26:03.202966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.920 [2024-07-15 22:26:03.202972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.920 [2024-07-15 22:26:03.216518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.920 [2024-07-15 22:26:03.216534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.920 [2024-07-15 22:26:03.216540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.920 [2024-07-15 22:26:03.227381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.920 [2024-07-15 22:26:03.227398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.920 [2024-07-15 22:26:03.227404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.920 [2024-07-15 22:26:03.238889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:37.920 [2024-07-15 22:26:03.238906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.920 [2024-07-15 22:26:03.238912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.180 [2024-07-15 22:26:03.252167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.180 [2024-07-15 22:26:03.252184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.180 [2024-07-15 22:26:03.252193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.180 [2024-07-15 22:26:03.263752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.180 [2024-07-15 22:26:03.263768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.180 [2024-07-15 22:26:03.263775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.180 [2024-07-15 22:26:03.276876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.180 [2024-07-15 22:26:03.276892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.180 [2024-07-15 22:26:03.276898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.180 [2024-07-15 22:26:03.289439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.180 [2024-07-15 22:26:03.289455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.180 [2024-07-15 22:26:03.289462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.180 [2024-07-15 22:26:03.301449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.180 [2024-07-15 22:26:03.301465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.180 [2024-07-15 22:26:03.301471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.180 [2024-07-15 22:26:03.313504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.180 [2024-07-15 22:26:03.313521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.180 [2024-07-15 22:26:03.313527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.180 [2024-07-15 22:26:03.325594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.180 [2024-07-15 22:26:03.325610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.180 [2024-07-15 22:26:03.325616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.180 [2024-07-15 22:26:03.337731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.180 [2024-07-15 22:26:03.337747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.180 [2024-07-15 22:26:03.337753] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.180 [2024-07-15 22:26:03.349743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.180 [2024-07-15 22:26:03.349760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.180 [2024-07-15 22:26:03.349766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.180 [2024-07-15 22:26:03.362525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.180 [2024-07-15 22:26:03.362545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.180 [2024-07-15 22:26:03.362551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.180 [2024-07-15 22:26:03.374754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.180 [2024-07-15 22:26:03.374771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.180 [2024-07-15 22:26:03.374777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.180 [2024-07-15 22:26:03.386672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.180 [2024-07-15 22:26:03.386688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.180 [2024-07-15 22:26:03.386694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.180 [2024-07-15 22:26:03.398814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.180 [2024-07-15 22:26:03.398830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.180 [2024-07-15 22:26:03.398836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.180 [2024-07-15 22:26:03.410663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.180 [2024-07-15 22:26:03.410680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:10223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.180 [2024-07-15 22:26:03.410686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.180 [2024-07-15 22:26:03.422463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.180 [2024-07-15 22:26:03.422480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.180 
[2024-07-15 22:26:03.422486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.180 [2024-07-15 22:26:03.435093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.180 [2024-07-15 22:26:03.435108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.180 [2024-07-15 22:26:03.435114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.180 [2024-07-15 22:26:03.448422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.180 [2024-07-15 22:26:03.448438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.180 [2024-07-15 22:26:03.448445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.180 [2024-07-15 22:26:03.460695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.180 [2024-07-15 22:26:03.460711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.180 [2024-07-15 22:26:03.460717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.180 [2024-07-15 22:26:03.472796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.180 [2024-07-15 22:26:03.472812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.180 [2024-07-15 22:26:03.472818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.180 [2024-07-15 22:26:03.484987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.180 [2024-07-15 22:26:03.485003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.180 [2024-07-15 22:26:03.485009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.180 [2024-07-15 22:26:03.497150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.180 [2024-07-15 22:26:03.497166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:9313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.180 [2024-07-15 22:26:03.497172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.439 [2024-07-15 22:26:03.508747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.439 [2024-07-15 22:26:03.508763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18179 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.439 [2024-07-15 22:26:03.508769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.439 [2024-07-15 22:26:03.521145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.439 [2024-07-15 22:26:03.521162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.439 [2024-07-15 22:26:03.521168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.439 [2024-07-15 22:26:03.533460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.439 [2024-07-15 22:26:03.533476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.439 [2024-07-15 22:26:03.533482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.439 [2024-07-15 22:26:03.546160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.439 [2024-07-15 22:26:03.546176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.439 [2024-07-15 22:26:03.546183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.439 [2024-07-15 22:26:03.560033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.439 [2024-07-15 22:26:03.560050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.439 [2024-07-15 22:26:03.560056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.439 [2024-07-15 22:26:03.570201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.439 [2024-07-15 22:26:03.570217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.439 [2024-07-15 22:26:03.570227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.439 [2024-07-15 22:26:03.583709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.439 [2024-07-15 22:26:03.583725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:16652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.439 [2024-07-15 22:26:03.583731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.439 [2024-07-15 22:26:03.595824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.440 [2024-07-15 22:26:03.595840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:73 nsid:1 lba:14932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.440 [2024-07-15 22:26:03.595846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.440 [2024-07-15 22:26:03.607891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.440 [2024-07-15 22:26:03.607908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.440 [2024-07-15 22:26:03.607914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.440 [2024-07-15 22:26:03.620381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.440 [2024-07-15 22:26:03.620398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:25065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.440 [2024-07-15 22:26:03.620404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.440 [2024-07-15 22:26:03.633999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.440 [2024-07-15 22:26:03.634015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:15765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.440 [2024-07-15 22:26:03.634021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.440 [2024-07-15 22:26:03.645901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.440 [2024-07-15 22:26:03.645917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.440 [2024-07-15 22:26:03.645923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.440 [2024-07-15 22:26:03.657037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.440 [2024-07-15 22:26:03.657054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.440 [2024-07-15 22:26:03.657060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.440 [2024-07-15 22:26:03.670432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.440 [2024-07-15 22:26:03.670449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.440 [2024-07-15 22:26:03.670455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.440 [2024-07-15 22:26:03.683315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.440 [2024-07-15 22:26:03.683334] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.440 [2024-07-15 22:26:03.683341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.440 [2024-07-15 22:26:03.694909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.440 [2024-07-15 22:26:03.694925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.440 [2024-07-15 22:26:03.694931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.440 [2024-07-15 22:26:03.707117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.440 [2024-07-15 22:26:03.707137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.440 [2024-07-15 22:26:03.707143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.440 [2024-07-15 22:26:03.718931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.440 [2024-07-15 22:26:03.718948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.440 [2024-07-15 22:26:03.718954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.440 [2024-07-15 22:26:03.731512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.440 [2024-07-15 22:26:03.731528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.440 [2024-07-15 22:26:03.731535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.440 [2024-07-15 22:26:03.744423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.440 [2024-07-15 22:26:03.744440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.440 [2024-07-15 22:26:03.744446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.440 [2024-07-15 22:26:03.757279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.440 [2024-07-15 22:26:03.757296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.440 [2024-07-15 22:26:03.757302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.700 [2024-07-15 22:26:03.768047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.700 
[2024-07-15 22:26:03.768063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:16968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.700 [2024-07-15 22:26:03.768070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.700 [2024-07-15 22:26:03.780303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.700 [2024-07-15 22:26:03.780320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.700 [2024-07-15 22:26:03.780326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.700 [2024-07-15 22:26:03.792966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.700 [2024-07-15 22:26:03.792982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.700 [2024-07-15 22:26:03.792989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.700 [2024-07-15 22:26:03.804880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.700 [2024-07-15 22:26:03.804896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.700 [2024-07-15 22:26:03.804902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.700 [2024-07-15 22:26:03.818874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.700 [2024-07-15 22:26:03.818891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:20743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.700 [2024-07-15 22:26:03.818898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.700 [2024-07-15 22:26:03.829591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.700 [2024-07-15 22:26:03.829607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.700 [2024-07-15 22:26:03.829614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.700 [2024-07-15 22:26:03.842196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.700 [2024-07-15 22:26:03.842213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.700 [2024-07-15 22:26:03.842219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.700 [2024-07-15 22:26:03.854197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x5d78e0) 00:28:38.700 [2024-07-15 22:26:03.854214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.700 [2024-07-15 22:26:03.854220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.700 [2024-07-15 22:26:03.866962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.700 [2024-07-15 22:26:03.866978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.700 [2024-07-15 22:26:03.866984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.700 [2024-07-15 22:26:03.877738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.700 [2024-07-15 22:26:03.877755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.700 [2024-07-15 22:26:03.877761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.700 [2024-07-15 22:26:03.891393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.700 [2024-07-15 22:26:03.891412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.700 [2024-07-15 22:26:03.891418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.700 [2024-07-15 22:26:03.903055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.700 [2024-07-15 22:26:03.903071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.700 [2024-07-15 22:26:03.903078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.700 [2024-07-15 22:26:03.915410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.700 [2024-07-15 22:26:03.915427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.700 [2024-07-15 22:26:03.915433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.700 [2024-07-15 22:26:03.928284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.700 [2024-07-15 22:26:03.928300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:15382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.700 [2024-07-15 22:26:03.928307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.700 [2024-07-15 22:26:03.940559] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.700 [2024-07-15 22:26:03.940575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.700 [2024-07-15 22:26:03.940581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.700 [2024-07-15 22:26:03.952335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.700 [2024-07-15 22:26:03.952352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.700 [2024-07-15 22:26:03.952358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.700 [2024-07-15 22:26:03.964676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.700 [2024-07-15 22:26:03.964692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.700 [2024-07-15 22:26:03.964699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.700 [2024-07-15 22:26:03.977267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.700 [2024-07-15 22:26:03.977283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.700 [2024-07-15 22:26:03.977289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.700 [2024-07-15 22:26:03.989452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.700 [2024-07-15 22:26:03.989468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.700 [2024-07-15 22:26:03.989475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.700 [2024-07-15 22:26:04.001679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.700 [2024-07-15 22:26:04.001695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.700 [2024-07-15 22:26:04.001702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.700 [2024-07-15 22:26:04.013727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.700 [2024-07-15 22:26:04.013743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.700 [2024-07-15 22:26:04.013750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
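The repeated "data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" pairs above are the intended outcome of this nvmf_digest_error case, not a failure: crc32c has been assigned to the accel "error" module and instructed to corrupt results (accel_error_inject_error -o crc32c -t corrupt -i 256), so data digest verification starts failing on the initiator side (nvme_tcp.c:1459) and the affected READs complete with a transient transport error, which bdevperf keeps retrying because the controller was attached with --bdev-retry-count -1. A rough sketch of the RPC sequence driven by host/digest.sh, reconstructed from the trace above, is shown below; paths are abbreviated relative to the spdk checkout, the 10.0.0.2:4420 listener and /var/tmp/bperf.sock are specific to this job, and it assumes rpc.py without -s addresses the target application's default /var/tmp/spdk.sock socket.

  # assumes an nvmf target with a TCP listener on 10.0.0.2:4420 is already up on its default RPC socket
  scripts/rpc.py accel_assign_opc -o crc32c -m error                  # route crc32c through the error-injection accel module
  # start bdevperf on its own RPC socket; -t 2 gives the 2-second randread run seen above
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z &
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  scripts/rpc.py accel_error_inject_error -o crc32c -t disable        # start with injection disabled
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256 # corrupt the next 256 crc32c results
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests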
00:28:38.960 [2024-07-15 22:26:04.026112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.960 [2024-07-15 22:26:04.026132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.960 [2024-07-15 22:26:04.026138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.960 [2024-07-15 22:26:04.038756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.960 [2024-07-15 22:26:04.038772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:14638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.960 [2024-07-15 22:26:04.038779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.960 [2024-07-15 22:26:04.051799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.960 [2024-07-15 22:26:04.051815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.960 [2024-07-15 22:26:04.051821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.960 [2024-07-15 22:26:04.062798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.960 [2024-07-15 22:26:04.062814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:25082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.960 [2024-07-15 22:26:04.062821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.960 [2024-07-15 22:26:04.075472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.960 [2024-07-15 22:26:04.075490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.960 [2024-07-15 22:26:04.075496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.960 [2024-07-15 22:26:04.087751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.960 [2024-07-15 22:26:04.087767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.960 [2024-07-15 22:26:04.087773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.960 [2024-07-15 22:26:04.099974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.961 [2024-07-15 22:26:04.099990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.961 [2024-07-15 22:26:04.100000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.961 [2024-07-15 22:26:04.112469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.961 [2024-07-15 22:26:04.112486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.961 [2024-07-15 22:26:04.112492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.961 [2024-07-15 22:26:04.124522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.961 [2024-07-15 22:26:04.124540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.961 [2024-07-15 22:26:04.124546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.961 [2024-07-15 22:26:04.137473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.961 [2024-07-15 22:26:04.137491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.961 [2024-07-15 22:26:04.137498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.961 [2024-07-15 22:26:04.149156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.961 [2024-07-15 22:26:04.149173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.961 [2024-07-15 22:26:04.149179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.961 [2024-07-15 22:26:04.161715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.961 [2024-07-15 22:26:04.161732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.961 [2024-07-15 22:26:04.161738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.961 [2024-07-15 22:26:04.173754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.961 [2024-07-15 22:26:04.173772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.961 [2024-07-15 22:26:04.173780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.961 [2024-07-15 22:26:04.185725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.961 [2024-07-15 22:26:04.185741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.961 [2024-07-15 22:26:04.185747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.961 [2024-07-15 22:26:04.198129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.961 [2024-07-15 22:26:04.198145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.961 [2024-07-15 22:26:04.198151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.961 [2024-07-15 22:26:04.210311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.961 [2024-07-15 22:26:04.210332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.961 [2024-07-15 22:26:04.210338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.961 [2024-07-15 22:26:04.222291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.961 [2024-07-15 22:26:04.222308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.961 [2024-07-15 22:26:04.222314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.961 [2024-07-15 22:26:04.236016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.961 [2024-07-15 22:26:04.236033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.961 [2024-07-15 22:26:04.236039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.961 [2024-07-15 22:26:04.248274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.961 [2024-07-15 22:26:04.248290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:12771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.961 [2024-07-15 22:26:04.248296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.961 [2024-07-15 22:26:04.259486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.961 [2024-07-15 22:26:04.259502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.961 [2024-07-15 22:26:04.259508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.961 [2024-07-15 22:26:04.272896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:38.961 [2024-07-15 22:26:04.272912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.961 [2024-07-15 22:26:04.272919] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.221 [2024-07-15 22:26:04.284702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:39.221 [2024-07-15 22:26:04.284718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.221 [2024-07-15 22:26:04.284724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.221 [2024-07-15 22:26:04.296664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:39.221 [2024-07-15 22:26:04.296680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:6164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.221 [2024-07-15 22:26:04.296686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.221 [2024-07-15 22:26:04.309511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:39.221 [2024-07-15 22:26:04.309527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.221 [2024-07-15 22:26:04.309533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.221 [2024-07-15 22:26:04.320552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:39.221 [2024-07-15 22:26:04.320568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.221 [2024-07-15 22:26:04.320574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.221 [2024-07-15 22:26:04.333720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:39.221 [2024-07-15 22:26:04.333737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.221 [2024-07-15 22:26:04.333743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.221 [2024-07-15 22:26:04.345986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:39.221 [2024-07-15 22:26:04.346003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.221 [2024-07-15 22:26:04.346009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.221 [2024-07-15 22:26:04.358201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:39.221 [2024-07-15 22:26:04.358218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:39.221 [2024-07-15 22:26:04.358224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.221 [2024-07-15 22:26:04.370531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:39.221 [2024-07-15 22:26:04.370549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.221 [2024-07-15 22:26:04.370555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.221 [2024-07-15 22:26:04.384669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:39.221 [2024-07-15 22:26:04.384686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.221 [2024-07-15 22:26:04.384692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.221 [2024-07-15 22:26:04.396318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:39.221 [2024-07-15 22:26:04.396334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.221 [2024-07-15 22:26:04.396340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.221 [2024-07-15 22:26:04.408622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:39.221 [2024-07-15 22:26:04.408638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.222 [2024-07-15 22:26:04.408644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.222 [2024-07-15 22:26:04.421214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:39.222 [2024-07-15 22:26:04.421231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.222 [2024-07-15 22:26:04.421240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.222 [2024-07-15 22:26:04.432876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:39.222 [2024-07-15 22:26:04.432893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.222 [2024-07-15 22:26:04.432900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.222 [2024-07-15 22:26:04.445528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:39.222 [2024-07-15 22:26:04.445545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3018 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.222 [2024-07-15 22:26:04.445551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.222 [2024-07-15 22:26:04.457845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:39.222 [2024-07-15 22:26:04.457862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.222 [2024-07-15 22:26:04.457868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.222 [2024-07-15 22:26:04.469594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:39.222 [2024-07-15 22:26:04.469610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:8710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.222 [2024-07-15 22:26:04.469617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.222 [2024-07-15 22:26:04.482619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:39.222 [2024-07-15 22:26:04.482636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.222 [2024-07-15 22:26:04.482642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.222 [2024-07-15 22:26:04.493957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:39.222 [2024-07-15 22:26:04.493974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.222 [2024-07-15 22:26:04.493980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.222 [2024-07-15 22:26:04.506409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:39.222 [2024-07-15 22:26:04.506426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.222 [2024-07-15 22:26:04.506432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.222 [2024-07-15 22:26:04.518144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:39.222 [2024-07-15 22:26:04.518160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.222 [2024-07-15 22:26:04.518166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.222 [2024-07-15 22:26:04.530360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:39.222 [2024-07-15 22:26:04.530380] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.222 [2024-07-15 22:26:04.530386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.222 [2024-07-15 22:26:04.542411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:39.222 [2024-07-15 22:26:04.542427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:10352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.222 [2024-07-15 22:26:04.542434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.482 [2024-07-15 22:26:04.555230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:39.482 [2024-07-15 22:26:04.555246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:25099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.482 [2024-07-15 22:26:04.555253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.482 [2024-07-15 22:26:04.567805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d78e0) 00:28:39.482 [2024-07-15 22:26:04.567822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.482 [2024-07-15 22:26:04.567828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.482 00:28:39.482 Latency(us) 00:28:39.482 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:39.482 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:39.482 nvme0n1 : 2.04 20288.46 79.25 0.00 0.00 6211.41 3850.24 52647.25 00:28:39.482 =================================================================================================================== 00:28:39.482 Total : 20288.46 79.25 0.00 0.00 6211.41 3850.24 52647.25 00:28:39.482 0 00:28:39.482 22:26:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:39.482 22:26:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:39.482 22:26:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:39.482 | .driver_specific 00:28:39.482 | .nvme_error 00:28:39.482 | .status_code 00:28:39.482 | .command_transient_transport_error' 00:28:39.482 22:26:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:39.482 22:26:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 162 > 0 )) 00:28:39.482 22:26:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2947557 00:28:39.482 22:26:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2947557 ']' 00:28:39.482 22:26:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2947557 00:28:39.482 22:26:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 
00:28:39.482 22:26:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:39.482 22:26:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2947557 00:28:39.766 22:26:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:39.766 22:26:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:39.766 22:26:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2947557' 00:28:39.766 killing process with pid 2947557 00:28:39.766 22:26:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2947557 00:28:39.766 Received shutdown signal, test time was about 2.000000 seconds 00:28:39.766 00:28:39.766 Latency(us) 00:28:39.766 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:39.766 =================================================================================================================== 00:28:39.766 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:39.766 22:26:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2947557 00:28:39.766 22:26:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:28:39.766 22:26:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:39.766 22:26:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:39.766 22:26:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:39.766 22:26:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:39.766 22:26:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2948307 00:28:39.766 22:26:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2948307 /var/tmp/bperf.sock 00:28:39.766 22:26:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2948307 ']' 00:28:39.766 22:26:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:28:39.766 22:26:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:39.766 22:26:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:39.766 22:26:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:39.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:39.766 22:26:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:39.766 22:26:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:39.766 [2024-07-15 22:26:05.013388] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
00:28:39.766 [2024-07-15 22:26:05.013446] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2948307 ] 00:28:39.766 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:39.766 Zero copy mechanism will not be used. 00:28:39.766 EAL: No free 2048 kB hugepages reported on node 1 00:28:39.766 [2024-07-15 22:26:05.087491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:40.025 [2024-07-15 22:26:05.139857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:40.591 22:26:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:40.591 22:26:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:40.591 22:26:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:40.591 22:26:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:40.851 22:26:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:40.851 22:26:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.851 22:26:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:40.851 22:26:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.851 22:26:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:40.851 22:26:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:41.111 nvme0n1 00:28:41.111 22:26:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:41.111 22:26:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.111 22:26:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:41.111 22:26:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.111 22:26:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:41.111 22:26:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:41.111 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:41.111 Zero copy mechanism will not be used. 00:28:41.111 Running I/O for 2 seconds... 
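The burst of digest errors that follows is driven entirely by the RPC sequence traced just above: per-command error statistics and unlimited bdev retries are enabled, crc32c injection is switched off while the NVMe/TCP controller attaches with data digest (--ddgst) enabled, injection is then switched to corrupt mode, and perform_tests starts the timed workload. Condensed into a runnable sketch (script paths shortened here; the full rpc.py and bdevperf.py paths appear verbatim in the trace above):

  # Keep per-command NVMe error statistics; --bdev-retry-count -1 retries failed I/O indefinitely.
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Leave crc32c error injection disabled while the controller attaches.
  rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t disable
  # Attach the NVMe/TCP controller with data digest (DDGST) turned on.
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Corrupt crc32c results (same -t corrupt -i 32 arguments as in the trace), so
  # received data digests stop matching and READs complete with status 00/22.
  rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t corrupt -i 32
  # Run the timed workload (this bdevperf instance was started with -w randread -o 131072 -q 16 -t 2).
  bdevperf.py -s /var/tmp/bperf.sock perform_tests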
00:28:41.111 [2024-07-15 22:26:06.325851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.111 [2024-07-15 22:26:06.325884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.111 [2024-07-15 22:26:06.325893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.111 [2024-07-15 22:26:06.342937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.111 [2024-07-15 22:26:06.342958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.111 [2024-07-15 22:26:06.342966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.111 [2024-07-15 22:26:06.357905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.111 [2024-07-15 22:26:06.357923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.111 [2024-07-15 22:26:06.357930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.111 [2024-07-15 22:26:06.373387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.111 [2024-07-15 22:26:06.373405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.111 [2024-07-15 22:26:06.373412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.111 [2024-07-15 22:26:06.389744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.111 [2024-07-15 22:26:06.389762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.111 [2024-07-15 22:26:06.389768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.111 [2024-07-15 22:26:06.405722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.111 [2024-07-15 22:26:06.405740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.111 [2024-07-15 22:26:06.405750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.111 [2024-07-15 22:26:06.418948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.111 [2024-07-15 22:26:06.418966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.111 [2024-07-15 22:26:06.418972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.111 [2024-07-15 22:26:06.435496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.111 [2024-07-15 22:26:06.435513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.111 [2024-07-15 22:26:06.435519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.371 [2024-07-15 22:26:06.452201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.371 [2024-07-15 22:26:06.452218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.371 [2024-07-15 22:26:06.452224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.371 [2024-07-15 22:26:06.468037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.371 [2024-07-15 22:26:06.468055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.371 [2024-07-15 22:26:06.468062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.371 [2024-07-15 22:26:06.484131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.371 [2024-07-15 22:26:06.484149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.371 [2024-07-15 22:26:06.484155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.371 [2024-07-15 22:26:06.500657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.372 [2024-07-15 22:26:06.500674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.372 [2024-07-15 22:26:06.500681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.372 [2024-07-15 22:26:06.515713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.372 [2024-07-15 22:26:06.515731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.372 [2024-07-15 22:26:06.515737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.372 [2024-07-15 22:26:06.531740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.372 [2024-07-15 22:26:06.531758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.372 [2024-07-15 22:26:06.531764] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.372 [2024-07-15 22:26:06.547067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.372 [2024-07-15 22:26:06.547088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.372 [2024-07-15 22:26:06.547094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.372 [2024-07-15 22:26:06.562265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.372 [2024-07-15 22:26:06.562282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.372 [2024-07-15 22:26:06.562288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.372 [2024-07-15 22:26:06.578418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.372 [2024-07-15 22:26:06.578437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.372 [2024-07-15 22:26:06.578443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.372 [2024-07-15 22:26:06.595347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.372 [2024-07-15 22:26:06.595364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.372 [2024-07-15 22:26:06.595371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.372 [2024-07-15 22:26:06.610892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.372 [2024-07-15 22:26:06.610909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.372 [2024-07-15 22:26:06.610916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.372 [2024-07-15 22:26:06.625153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.372 [2024-07-15 22:26:06.625170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.372 [2024-07-15 22:26:06.625176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.372 [2024-07-15 22:26:06.641394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.372 [2024-07-15 22:26:06.641410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:41.372 [2024-07-15 22:26:06.641416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.372 [2024-07-15 22:26:06.656888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.372 [2024-07-15 22:26:06.656906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.372 [2024-07-15 22:26:06.656912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.372 [2024-07-15 22:26:06.671706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.372 [2024-07-15 22:26:06.671723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.372 [2024-07-15 22:26:06.671733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.372 [2024-07-15 22:26:06.687809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.372 [2024-07-15 22:26:06.687826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.372 [2024-07-15 22:26:06.687832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.632 [2024-07-15 22:26:06.703938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.632 [2024-07-15 22:26:06.703955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.632 [2024-07-15 22:26:06.703961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.632 [2024-07-15 22:26:06.720396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.632 [2024-07-15 22:26:06.720413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.632 [2024-07-15 22:26:06.720419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.632 [2024-07-15 22:26:06.736441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.632 [2024-07-15 22:26:06.736460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.632 [2024-07-15 22:26:06.736466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.632 [2024-07-15 22:26:06.752350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.632 [2024-07-15 22:26:06.752367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.632 [2024-07-15 22:26:06.752373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.632 [2024-07-15 22:26:06.768028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.632 [2024-07-15 22:26:06.768046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.632 [2024-07-15 22:26:06.768052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.632 [2024-07-15 22:26:06.783639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.632 [2024-07-15 22:26:06.783656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.632 [2024-07-15 22:26:06.783663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.632 [2024-07-15 22:26:06.800348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.632 [2024-07-15 22:26:06.800366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.632 [2024-07-15 22:26:06.800372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.632 [2024-07-15 22:26:06.815988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.632 [2024-07-15 22:26:06.816009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.632 [2024-07-15 22:26:06.816015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.632 [2024-07-15 22:26:06.830690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.632 [2024-07-15 22:26:06.830707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.632 [2024-07-15 22:26:06.830713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.632 [2024-07-15 22:26:06.845963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.632 [2024-07-15 22:26:06.845981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.632 [2024-07-15 22:26:06.845987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.632 [2024-07-15 22:26:06.862934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.632 [2024-07-15 22:26:06.862950] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.632 [2024-07-15 22:26:06.862956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.632 [2024-07-15 22:26:06.875021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.632 [2024-07-15 22:26:06.875037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.632 [2024-07-15 22:26:06.875043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.632 [2024-07-15 22:26:06.890052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.632 [2024-07-15 22:26:06.890069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.632 [2024-07-15 22:26:06.890075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.632 [2024-07-15 22:26:06.905912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.632 [2024-07-15 22:26:06.905929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.632 [2024-07-15 22:26:06.905936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.632 [2024-07-15 22:26:06.922864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.632 [2024-07-15 22:26:06.922881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.632 [2024-07-15 22:26:06.922887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.632 [2024-07-15 22:26:06.938411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.632 [2024-07-15 22:26:06.938428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.632 [2024-07-15 22:26:06.938434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.632 [2024-07-15 22:26:06.955510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.632 [2024-07-15 22:26:06.955527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.632 [2024-07-15 22:26:06.955533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.893 [2024-07-15 22:26:06.972457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 
00:28:41.893 [2024-07-15 22:26:06.972474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.893 [2024-07-15 22:26:06.972480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.893 [2024-07-15 22:26:06.987276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.893 [2024-07-15 22:26:06.987293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.893 [2024-07-15 22:26:06.987299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.893 [2024-07-15 22:26:07.001527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.893 [2024-07-15 22:26:07.001544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.893 [2024-07-15 22:26:07.001550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.893 [2024-07-15 22:26:07.017153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.893 [2024-07-15 22:26:07.017170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.893 [2024-07-15 22:26:07.017176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.893 [2024-07-15 22:26:07.032972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.893 [2024-07-15 22:26:07.032989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.893 [2024-07-15 22:26:07.032995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.893 [2024-07-15 22:26:07.049526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.893 [2024-07-15 22:26:07.049543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.893 [2024-07-15 22:26:07.049549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.893 [2024-07-15 22:26:07.065762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.893 [2024-07-15 22:26:07.065779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.893 [2024-07-15 22:26:07.065785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.893 [2024-07-15 22:26:07.080416] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.893 [2024-07-15 22:26:07.080433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.893 [2024-07-15 22:26:07.080444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.893 [2024-07-15 22:26:07.097297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.893 [2024-07-15 22:26:07.097314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.893 [2024-07-15 22:26:07.097320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.893 [2024-07-15 22:26:07.111712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.893 [2024-07-15 22:26:07.111729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.893 [2024-07-15 22:26:07.111736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.893 [2024-07-15 22:26:07.126797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.893 [2024-07-15 22:26:07.126813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.893 [2024-07-15 22:26:07.126819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.893 [2024-07-15 22:26:07.142587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.893 [2024-07-15 22:26:07.142604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.893 [2024-07-15 22:26:07.142610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.893 [2024-07-15 22:26:07.158695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.893 [2024-07-15 22:26:07.158711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.893 [2024-07-15 22:26:07.158717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.893 [2024-07-15 22:26:07.174495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.893 [2024-07-15 22:26:07.174512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.893 [2024-07-15 22:26:07.174518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:28:41.893 [2024-07-15 22:26:07.189240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.893 [2024-07-15 22:26:07.189257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.893 [2024-07-15 22:26:07.189263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.893 [2024-07-15 22:26:07.206314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:41.893 [2024-07-15 22:26:07.206331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.893 [2024-07-15 22:26:07.206337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.154 [2024-07-15 22:26:07.222156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.154 [2024-07-15 22:26:07.222176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.154 [2024-07-15 22:26:07.222182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.154 [2024-07-15 22:26:07.237097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.154 [2024-07-15 22:26:07.237115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.154 [2024-07-15 22:26:07.237121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.154 [2024-07-15 22:26:07.253038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.154 [2024-07-15 22:26:07.253054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.154 [2024-07-15 22:26:07.253060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.154 [2024-07-15 22:26:07.267598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.154 [2024-07-15 22:26:07.267615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.154 [2024-07-15 22:26:07.267621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.154 [2024-07-15 22:26:07.281356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.154 [2024-07-15 22:26:07.281372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.154 [2024-07-15 22:26:07.281378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.154 [2024-07-15 22:26:07.297976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.154 [2024-07-15 22:26:07.297993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.154 [2024-07-15 22:26:07.297999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.154 [2024-07-15 22:26:07.314203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.154 [2024-07-15 22:26:07.314220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.154 [2024-07-15 22:26:07.314226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.154 [2024-07-15 22:26:07.329768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.154 [2024-07-15 22:26:07.329785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.154 [2024-07-15 22:26:07.329791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.154 [2024-07-15 22:26:07.347864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.154 [2024-07-15 22:26:07.347882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.154 [2024-07-15 22:26:07.347892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.154 [2024-07-15 22:26:07.365229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.154 [2024-07-15 22:26:07.365246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.154 [2024-07-15 22:26:07.365253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.154 [2024-07-15 22:26:07.378578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.154 [2024-07-15 22:26:07.378595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.154 [2024-07-15 22:26:07.378601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.154 [2024-07-15 22:26:07.389665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.154 [2024-07-15 22:26:07.389683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.154 [2024-07-15 22:26:07.389690] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.154 [2024-07-15 22:26:07.402239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.154 [2024-07-15 22:26:07.402256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.154 [2024-07-15 22:26:07.402262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.154 [2024-07-15 22:26:07.413702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.154 [2024-07-15 22:26:07.413720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.154 [2024-07-15 22:26:07.413726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.154 [2024-07-15 22:26:07.426543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.154 [2024-07-15 22:26:07.426560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.154 [2024-07-15 22:26:07.426566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.154 [2024-07-15 22:26:07.442926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.154 [2024-07-15 22:26:07.442943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.154 [2024-07-15 22:26:07.442949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.154 [2024-07-15 22:26:07.459054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.154 [2024-07-15 22:26:07.459070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.154 [2024-07-15 22:26:07.459076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.154 [2024-07-15 22:26:07.473670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.154 [2024-07-15 22:26:07.473692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.154 [2024-07-15 22:26:07.473698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.415 [2024-07-15 22:26:07.488596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.415 [2024-07-15 22:26:07.488615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:42.415 [2024-07-15 22:26:07.488621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.415 [2024-07-15 22:26:07.503636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.415 [2024-07-15 22:26:07.503654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.415 [2024-07-15 22:26:07.503660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.415 [2024-07-15 22:26:07.518128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.415 [2024-07-15 22:26:07.518147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.415 [2024-07-15 22:26:07.518153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.415 [2024-07-15 22:26:07.534269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.415 [2024-07-15 22:26:07.534287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.415 [2024-07-15 22:26:07.534293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.415 [2024-07-15 22:26:07.549655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.415 [2024-07-15 22:26:07.549673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.415 [2024-07-15 22:26:07.549679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.415 [2024-07-15 22:26:07.565529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.415 [2024-07-15 22:26:07.565547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.415 [2024-07-15 22:26:07.565553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.415 [2024-07-15 22:26:07.581694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.415 [2024-07-15 22:26:07.581712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.415 [2024-07-15 22:26:07.581718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.416 [2024-07-15 22:26:07.598406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.416 [2024-07-15 22:26:07.598424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.416 [2024-07-15 22:26:07.598431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.416 [2024-07-15 22:26:07.614649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.416 [2024-07-15 22:26:07.614667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.416 [2024-07-15 22:26:07.614673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.416 [2024-07-15 22:26:07.632036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.416 [2024-07-15 22:26:07.632054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.416 [2024-07-15 22:26:07.632060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.416 [2024-07-15 22:26:07.645235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.416 [2024-07-15 22:26:07.645253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.416 [2024-07-15 22:26:07.645259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.416 [2024-07-15 22:26:07.660529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.416 [2024-07-15 22:26:07.660547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.416 [2024-07-15 22:26:07.660553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.416 [2024-07-15 22:26:07.676352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.416 [2024-07-15 22:26:07.676370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.416 [2024-07-15 22:26:07.676377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.416 [2024-07-15 22:26:07.692527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.416 [2024-07-15 22:26:07.692545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.416 [2024-07-15 22:26:07.692552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.416 [2024-07-15 22:26:07.709117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.416 [2024-07-15 22:26:07.709140] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.416 [2024-07-15 22:26:07.709146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.416 [2024-07-15 22:26:07.724498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.416 [2024-07-15 22:26:07.724516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.416 [2024-07-15 22:26:07.724523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.677 [2024-07-15 22:26:07.740520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.677 [2024-07-15 22:26:07.740538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.677 [2024-07-15 22:26:07.740547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.677 [2024-07-15 22:26:07.756620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.677 [2024-07-15 22:26:07.756638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.677 [2024-07-15 22:26:07.756644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.677 [2024-07-15 22:26:07.772117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.677 [2024-07-15 22:26:07.772139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.677 [2024-07-15 22:26:07.772145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.677 [2024-07-15 22:26:07.788554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.677 [2024-07-15 22:26:07.788573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.677 [2024-07-15 22:26:07.788579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.677 [2024-07-15 22:26:07.803577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.677 [2024-07-15 22:26:07.803595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.678 [2024-07-15 22:26:07.803601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.678 [2024-07-15 22:26:07.819491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 
00:28:42.678 [2024-07-15 22:26:07.819509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.678 [2024-07-15 22:26:07.819515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.678 [2024-07-15 22:26:07.835585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.678 [2024-07-15 22:26:07.835603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.678 [2024-07-15 22:26:07.835609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.678 [2024-07-15 22:26:07.852029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.678 [2024-07-15 22:26:07.852047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.678 [2024-07-15 22:26:07.852054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.678 [2024-07-15 22:26:07.868163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.678 [2024-07-15 22:26:07.868181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.678 [2024-07-15 22:26:07.868187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.678 [2024-07-15 22:26:07.882348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.678 [2024-07-15 22:26:07.882370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.678 [2024-07-15 22:26:07.882377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.678 [2024-07-15 22:26:07.896707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.678 [2024-07-15 22:26:07.896726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.678 [2024-07-15 22:26:07.896732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.678 [2024-07-15 22:26:07.912332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.678 [2024-07-15 22:26:07.912350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.678 [2024-07-15 22:26:07.912356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.678 [2024-07-15 22:26:07.928819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.678 [2024-07-15 22:26:07.928838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.678 [2024-07-15 22:26:07.928844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.678 [2024-07-15 22:26:07.943231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.678 [2024-07-15 22:26:07.943258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.678 [2024-07-15 22:26:07.943264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.678 [2024-07-15 22:26:07.956536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.678 [2024-07-15 22:26:07.956554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.678 [2024-07-15 22:26:07.956560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.678 [2024-07-15 22:26:07.970974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.678 [2024-07-15 22:26:07.970993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.678 [2024-07-15 22:26:07.970999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.678 [2024-07-15 22:26:07.986324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.678 [2024-07-15 22:26:07.986343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.678 [2024-07-15 22:26:07.986349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.939 [2024-07-15 22:26:08.001819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.939 [2024-07-15 22:26:08.001837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.939 [2024-07-15 22:26:08.001844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.939 [2024-07-15 22:26:08.016368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.939 [2024-07-15 22:26:08.016385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.939 [2024-07-15 22:26:08.016391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.939 [2024-07-15 22:26:08.032506] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.939 [2024-07-15 22:26:08.032524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.939 [2024-07-15 22:26:08.032531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.939 [2024-07-15 22:26:08.048917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.939 [2024-07-15 22:26:08.048935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.939 [2024-07-15 22:26:08.048940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.939 [2024-07-15 22:26:08.064992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.939 [2024-07-15 22:26:08.065010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.939 [2024-07-15 22:26:08.065015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.939 [2024-07-15 22:26:08.080327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.939 [2024-07-15 22:26:08.080344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.939 [2024-07-15 22:26:08.080350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.939 [2024-07-15 22:26:08.097390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.939 [2024-07-15 22:26:08.097408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.939 [2024-07-15 22:26:08.097414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.939 [2024-07-15 22:26:08.113088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.939 [2024-07-15 22:26:08.113105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.939 [2024-07-15 22:26:08.113112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.939 [2024-07-15 22:26:08.127338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.939 [2024-07-15 22:26:08.127356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.939 [2024-07-15 22:26:08.127362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:28:42.939 [2024-07-15 22:26:08.141939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.940 [2024-07-15 22:26:08.141960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.940 [2024-07-15 22:26:08.141967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.940 [2024-07-15 22:26:08.157944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.940 [2024-07-15 22:26:08.157963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.940 [2024-07-15 22:26:08.157969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.940 [2024-07-15 22:26:08.173675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.940 [2024-07-15 22:26:08.173693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.940 [2024-07-15 22:26:08.173699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.940 [2024-07-15 22:26:08.188415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.940 [2024-07-15 22:26:08.188432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.940 [2024-07-15 22:26:08.188438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.940 [2024-07-15 22:26:08.202743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.940 [2024-07-15 22:26:08.202761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.940 [2024-07-15 22:26:08.202768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.940 [2024-07-15 22:26:08.214806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.940 [2024-07-15 22:26:08.214824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.940 [2024-07-15 22:26:08.214831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.940 [2024-07-15 22:26:08.229407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.940 [2024-07-15 22:26:08.229424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.940 [2024-07-15 22:26:08.229431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.940 [2024-07-15 22:26:08.245864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.940 [2024-07-15 22:26:08.245882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.940 [2024-07-15 22:26:08.245888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.940 [2024-07-15 22:26:08.260492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:42.940 [2024-07-15 22:26:08.260510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.940 [2024-07-15 22:26:08.260516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:43.200 [2024-07-15 22:26:08.278111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:43.200 [2024-07-15 22:26:08.278133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.201 [2024-07-15 22:26:08.278139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:43.201 [2024-07-15 22:26:08.292671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:43.201 [2024-07-15 22:26:08.292689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.201 [2024-07-15 22:26:08.292695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.201 [2024-07-15 22:26:08.307476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17bdb80) 00:28:43.201 [2024-07-15 22:26:08.307494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.201 [2024-07-15 22:26:08.307500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:43.201 00:28:43.201 Latency(us) 00:28:43.201 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:43.201 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:43.201 nvme0n1 : 2.00 2006.05 250.76 0.00 0.00 7973.87 2375.68 17694.72 00:28:43.201 =================================================================================================================== 00:28:43.201 Total : 2006.05 250.76 0.00 0.00 7973.87 2375.68 17694.72 00:28:43.201 0 00:28:43.201 22:26:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:43.201 22:26:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:43.201 22:26:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_get_iostat -b nvme0n1 00:28:43.201 22:26:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:43.201 | .driver_specific 00:28:43.201 | .nvme_error 00:28:43.201 | .status_code 00:28:43.201 | .command_transient_transport_error' 00:28:43.201 22:26:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 129 > 0 )) 00:28:43.201 22:26:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2948307 00:28:43.201 22:26:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2948307 ']' 00:28:43.201 22:26:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2948307 00:28:43.201 22:26:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:28:43.201 22:26:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:43.201 22:26:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2948307 00:28:43.495 22:26:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:43.495 22:26:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:43.495 22:26:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2948307' 00:28:43.495 killing process with pid 2948307 00:28:43.495 22:26:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2948307 00:28:43.495 Received shutdown signal, test time was about 2.000000 seconds 00:28:43.495 00:28:43.495 Latency(us) 00:28:43.495 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:43.495 =================================================================================================================== 00:28:43.495 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:43.495 22:26:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2948307 00:28:43.495 22:26:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:28:43.495 22:26:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:43.495 22:26:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:43.495 22:26:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:43.495 22:26:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:43.495 22:26:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2948991 00:28:43.495 22:26:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2948991 /var/tmp/bperf.sock 00:28:43.495 22:26:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2948991 ']' 00:28:43.495 22:26:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:28:43.495 22:26:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:43.495 22:26:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:43.495 22:26:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:43.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:43.495 22:26:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:43.495 22:26:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:43.495 [2024-07-15 22:26:08.726839] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:28:43.495 [2024-07-15 22:26:08.726898] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2948991 ] 00:28:43.495 EAL: No free 2048 kB hugepages reported on node 1 00:28:43.495 [2024-07-15 22:26:08.802522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.756 [2024-07-15 22:26:08.856273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:44.326 22:26:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:44.326 22:26:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:44.326 22:26:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:44.326 22:26:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:44.586 22:26:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:44.586 22:26:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.586 22:26:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:44.586 22:26:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.586 22:26:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:44.587 22:26:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:44.847 nvme0n1 00:28:44.848 22:26:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:44.848 22:26:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.848 22:26:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:44.848 22:26:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.848 22:26:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:44.848 22:26:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:44.848 Running I/O for 2 seconds... 
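The xtrace interleaved with the log above shows the shape of this digest-error pass: bdevperf is launched against /var/tmp/bperf.sock in wait mode, NVMe error statistics and unlimited bdev retries are enabled, CRC32C corruption is injected every 256 operations via accel_error_inject_error, a controller is attached with data digests enabled (--ddgst), perform_tests drives I/O for 2 seconds, and digest.sh then reads the command_transient_transport_error counter back out of bdev_get_iostat. The sketch below condenses that sequence into a standalone script using only the binaries, RPC calls, paths, and parameters visible in the trace; the socket-wait loop, the assumption that the harness's rpc_cmd addresses the nvmf target's default RPC socket, and the plain kill-on-exit are simplifications for illustration, not the harness code itself.

#!/usr/bin/env bash
# Condensed sketch of the digest-error flow traced above (not the harness itself).
# Paths, RPC names, and parameters are taken from the xtrace; the wait loop, the
# default-socket assumption for tgt_rpc, and the kill-on-exit are simplifications.
set -euo pipefail

SPDK=${SPDK:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
SOCK=/var/tmp/bperf.sock
bperf_rpc() { "$SPDK/scripts/rpc.py" -s "$SOCK" "$@"; }
tgt_rpc()   { "$SPDK/scripts/rpc.py" "$@"; }  # harness rpc_cmd -> target's default socket (assumption)

# Start bdevperf idle (-z) with the randwrite/4096/qd128 job used in this pass.
"$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &
bperf_pid=$!
trap 'kill "$bperf_pid" 2>/dev/null || true' EXIT
until [ -S "$SOCK" ]; do sleep 0.2; done  # stand-in for the harness's waitforlisten

# Count NVMe errors, retry forever, corrupt every 256th CRC32C operation,
# and attach the controller with data digests enabled so corruption is detected.
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
tgt_rpc accel_error_inject_error -o crc32c -t corrupt -i 256
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Run the 2-second job, then read back the transient-transport-error counter
# that digest.sh checks against zero.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
errcount=$(bperf_rpc bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
echo "transient transport errors: $errcount"
(( errcount > 0 ))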
00:28:44.848 [2024-07-15 22:26:10.075576] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:44.848 [2024-07-15 22:26:10.075885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.848 [2024-07-15 22:26:10.075911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:44.848 [2024-07-15 22:26:10.087867] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:44.848 [2024-07-15 22:26:10.088258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.848 [2024-07-15 22:26:10.088277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:44.848 [2024-07-15 22:26:10.100159] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:44.848 [2024-07-15 22:26:10.100622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.848 [2024-07-15 22:26:10.100638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:44.848 [2024-07-15 22:26:10.112342] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:44.848 [2024-07-15 22:26:10.112614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.848 [2024-07-15 22:26:10.112630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:44.848 [2024-07-15 22:26:10.124573] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:44.848 [2024-07-15 22:26:10.124844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.848 [2024-07-15 22:26:10.124858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:44.848 [2024-07-15 22:26:10.136786] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:44.848 [2024-07-15 22:26:10.137171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.848 [2024-07-15 22:26:10.137187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:44.848 [2024-07-15 22:26:10.148958] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:44.848 [2024-07-15 22:26:10.149434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.848 [2024-07-15 22:26:10.149449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 
sqhd:007d p:0 m:0 dnr:0 00:28:44.848 [2024-07-15 22:26:10.161148] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:44.848 [2024-07-15 22:26:10.161419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.848 [2024-07-15 22:26:10.161436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.108 [2024-07-15 22:26:10.173367] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.108 [2024-07-15 22:26:10.173842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.108 [2024-07-15 22:26:10.173857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.108 [2024-07-15 22:26:10.185514] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.108 [2024-07-15 22:26:10.185938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.108 [2024-07-15 22:26:10.185953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.108 [2024-07-15 22:26:10.197661] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.108 [2024-07-15 22:26:10.198070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.108 [2024-07-15 22:26:10.198085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.108 [2024-07-15 22:26:10.209824] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.108 [2024-07-15 22:26:10.210096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.108 [2024-07-15 22:26:10.210112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.109 [2024-07-15 22:26:10.221950] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.109 [2024-07-15 22:26:10.222433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.109 [2024-07-15 22:26:10.222448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.109 [2024-07-15 22:26:10.234096] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.109 [2024-07-15 22:26:10.234477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.109 [2024-07-15 22:26:10.234493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.109 [2024-07-15 22:26:10.246267] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.109 [2024-07-15 22:26:10.246653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.109 [2024-07-15 22:26:10.246669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.109 [2024-07-15 22:26:10.258430] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.109 [2024-07-15 22:26:10.258678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.109 [2024-07-15 22:26:10.258696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.109 [2024-07-15 22:26:10.270579] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.109 [2024-07-15 22:26:10.271026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.109 [2024-07-15 22:26:10.271042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.109 [2024-07-15 22:26:10.282718] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.109 [2024-07-15 22:26:10.282965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.109 [2024-07-15 22:26:10.282981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.109 [2024-07-15 22:26:10.294981] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.109 [2024-07-15 22:26:10.295341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.109 [2024-07-15 22:26:10.295357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.109 [2024-07-15 22:26:10.307121] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.109 [2024-07-15 22:26:10.307588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.109 [2024-07-15 22:26:10.307603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.109 [2024-07-15 22:26:10.319281] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.109 [2024-07-15 22:26:10.319693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.109 [2024-07-15 22:26:10.319709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.109 [2024-07-15 22:26:10.331371] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.109 [2024-07-15 22:26:10.331757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.109 [2024-07-15 22:26:10.331772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.109 [2024-07-15 22:26:10.343510] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.109 [2024-07-15 22:26:10.343770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.109 [2024-07-15 22:26:10.343786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.109 [2024-07-15 22:26:10.355635] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.109 [2024-07-15 22:26:10.356046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.109 [2024-07-15 22:26:10.356061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.109 [2024-07-15 22:26:10.367787] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.109 [2024-07-15 22:26:10.368149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.109 [2024-07-15 22:26:10.368165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.109 [2024-07-15 22:26:10.379962] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.109 [2024-07-15 22:26:10.380218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.109 [2024-07-15 22:26:10.380233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.109 [2024-07-15 22:26:10.392069] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.109 [2024-07-15 22:26:10.392465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.109 [2024-07-15 22:26:10.392480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.109 [2024-07-15 22:26:10.404220] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.109 [2024-07-15 22:26:10.404668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.109 [2024-07-15 22:26:10.404683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.109 [2024-07-15 22:26:10.416336] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.109 [2024-07-15 22:26:10.416736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.109 [2024-07-15 22:26:10.416751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.109 [2024-07-15 22:26:10.428451] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.109 [2024-07-15 22:26:10.428730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.109 [2024-07-15 22:26:10.428745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.370 [2024-07-15 22:26:10.440616] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.370 [2024-07-15 22:26:10.440899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.370 [2024-07-15 22:26:10.440915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.370 [2024-07-15 22:26:10.452769] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.370 [2024-07-15 22:26:10.453022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.370 [2024-07-15 22:26:10.453038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.370 [2024-07-15 22:26:10.464856] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.370 [2024-07-15 22:26:10.465276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.370 [2024-07-15 22:26:10.465291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.370 [2024-07-15 22:26:10.476939] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.370 [2024-07-15 22:26:10.477394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.370 [2024-07-15 22:26:10.477410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.370 [2024-07-15 22:26:10.489109] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.370 [2024-07-15 22:26:10.489450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.370 [2024-07-15 22:26:10.489465] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.370 [2024-07-15 22:26:10.501210] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.370 [2024-07-15 22:26:10.501614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.370 [2024-07-15 22:26:10.501629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.370 [2024-07-15 22:26:10.513379] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.370 [2024-07-15 22:26:10.513840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.370 [2024-07-15 22:26:10.513855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.370 [2024-07-15 22:26:10.525493] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.370 [2024-07-15 22:26:10.525854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.370 [2024-07-15 22:26:10.525870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.370 [2024-07-15 22:26:10.537622] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.370 [2024-07-15 22:26:10.537886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.370 [2024-07-15 22:26:10.537902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.370 [2024-07-15 22:26:10.549731] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.370 [2024-07-15 22:26:10.550012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.370 [2024-07-15 22:26:10.550028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.370 [2024-07-15 22:26:10.561883] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.370 [2024-07-15 22:26:10.562241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.370 [2024-07-15 22:26:10.562257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.370 [2024-07-15 22:26:10.574019] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.370 [2024-07-15 22:26:10.574434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.370 [2024-07-15 22:26:10.574452] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.370 [2024-07-15 22:26:10.586149] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.370 [2024-07-15 22:26:10.586547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.370 [2024-07-15 22:26:10.586563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.370 [2024-07-15 22:26:10.598267] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.371 [2024-07-15 22:26:10.598649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.371 [2024-07-15 22:26:10.598664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.371 [2024-07-15 22:26:10.610420] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.371 [2024-07-15 22:26:10.610676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.371 [2024-07-15 22:26:10.610691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.371 [2024-07-15 22:26:10.622529] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.371 [2024-07-15 22:26:10.622984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.371 [2024-07-15 22:26:10.622999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.371 [2024-07-15 22:26:10.634637] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.371 [2024-07-15 22:26:10.634902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.371 [2024-07-15 22:26:10.634917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.371 [2024-07-15 22:26:10.646732] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.371 [2024-07-15 22:26:10.647085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.371 [2024-07-15 22:26:10.647101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.371 [2024-07-15 22:26:10.658883] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.371 [2024-07-15 22:26:10.659151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.371 [2024-07-15 22:26:10.659167] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.371 [2024-07-15 22:26:10.670995] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.371 [2024-07-15 22:26:10.671379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.371 [2024-07-15 22:26:10.671395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.371 [2024-07-15 22:26:10.683238] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.371 [2024-07-15 22:26:10.683510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.371 [2024-07-15 22:26:10.683526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.632 [2024-07-15 22:26:10.695420] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.632 [2024-07-15 22:26:10.695845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.632 [2024-07-15 22:26:10.695860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.632 [2024-07-15 22:26:10.707631] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.632 [2024-07-15 22:26:10.708171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.633 [2024-07-15 22:26:10.708186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.633 [2024-07-15 22:26:10.719762] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.633 [2024-07-15 22:26:10.720021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.633 [2024-07-15 22:26:10.720036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.633 [2024-07-15 22:26:10.731874] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.633 [2024-07-15 22:26:10.732125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.633 [2024-07-15 22:26:10.732140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.633 [2024-07-15 22:26:10.744114] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.633 [2024-07-15 22:26:10.744590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.633 [2024-07-15 
22:26:10.744605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.633 [2024-07-15 22:26:10.756243] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.633 [2024-07-15 22:26:10.756545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.633 [2024-07-15 22:26:10.756560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.633 [2024-07-15 22:26:10.768391] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.633 [2024-07-15 22:26:10.768643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.633 [2024-07-15 22:26:10.768659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.633 [2024-07-15 22:26:10.780520] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.633 [2024-07-15 22:26:10.780784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.633 [2024-07-15 22:26:10.780799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.633 [2024-07-15 22:26:10.792589] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.633 [2024-07-15 22:26:10.792940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.633 [2024-07-15 22:26:10.792956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.633 [2024-07-15 22:26:10.804738] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.633 [2024-07-15 22:26:10.805030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.633 [2024-07-15 22:26:10.805045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.633 [2024-07-15 22:26:10.816808] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.633 [2024-07-15 22:26:10.817192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.633 [2024-07-15 22:26:10.817208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.633 [2024-07-15 22:26:10.828992] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.633 [2024-07-15 22:26:10.829443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:45.633 [2024-07-15 22:26:10.829458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.633 [2024-07-15 22:26:10.841084] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.633 [2024-07-15 22:26:10.841538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.633 [2024-07-15 22:26:10.841553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.633 [2024-07-15 22:26:10.853180] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.633 [2024-07-15 22:26:10.853441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.633 [2024-07-15 22:26:10.853455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.633 [2024-07-15 22:26:10.865303] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.633 [2024-07-15 22:26:10.865569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.633 [2024-07-15 22:26:10.865585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.633 [2024-07-15 22:26:10.877470] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.633 [2024-07-15 22:26:10.877763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.633 [2024-07-15 22:26:10.877777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.633 [2024-07-15 22:26:10.889582] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.633 [2024-07-15 22:26:10.889924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.633 [2024-07-15 22:26:10.889939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.633 [2024-07-15 22:26:10.901686] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.633 [2024-07-15 22:26:10.902112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.633 [2024-07-15 22:26:10.902131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.633 [2024-07-15 22:26:10.913805] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.633 [2024-07-15 22:26:10.914276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11137 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:28:45.633 [2024-07-15 22:26:10.914291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.633 [2024-07-15 22:26:10.925941] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.633 [2024-07-15 22:26:10.926206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.633 [2024-07-15 22:26:10.926221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.633 [2024-07-15 22:26:10.938006] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.633 [2024-07-15 22:26:10.938360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.633 [2024-07-15 22:26:10.938376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.633 [2024-07-15 22:26:10.950133] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.633 [2024-07-15 22:26:10.950508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.633 [2024-07-15 22:26:10.950522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.895 [2024-07-15 22:26:10.962281] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.895 [2024-07-15 22:26:10.962607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.895 [2024-07-15 22:26:10.962622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.895 [2024-07-15 22:26:10.974415] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.895 [2024-07-15 22:26:10.974779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.895 [2024-07-15 22:26:10.974794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.895 [2024-07-15 22:26:10.986517] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.895 [2024-07-15 22:26:10.986784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.895 [2024-07-15 22:26:10.986806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.895 [2024-07-15 22:26:10.998667] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.895 [2024-07-15 22:26:10.998927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10519 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.895 [2024-07-15 22:26:10.998952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.895 [2024-07-15 22:26:11.010878] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.895 [2024-07-15 22:26:11.011360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.895 [2024-07-15 22:26:11.011375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.895 [2024-07-15 22:26:11.023046] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.895 [2024-07-15 22:26:11.023423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.895 [2024-07-15 22:26:11.023438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.895 [2024-07-15 22:26:11.035373] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.895 [2024-07-15 22:26:11.035743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.895 [2024-07-15 22:26:11.035759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.895 [2024-07-15 22:26:11.047494] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.895 [2024-07-15 22:26:11.047972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.895 [2024-07-15 22:26:11.047986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.895 [2024-07-15 22:26:11.059605] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.895 [2024-07-15 22:26:11.059954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.895 [2024-07-15 22:26:11.059969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.895 [2024-07-15 22:26:11.071728] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.895 [2024-07-15 22:26:11.072120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.895 [2024-07-15 22:26:11.072137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.895 [2024-07-15 22:26:11.083828] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.895 [2024-07-15 22:26:11.084238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 
lba:23290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.895 [2024-07-15 22:26:11.084252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.895 [2024-07-15 22:26:11.095957] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.895 [2024-07-15 22:26:11.096355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.895 [2024-07-15 22:26:11.096370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.895 [2024-07-15 22:26:11.108127] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.895 [2024-07-15 22:26:11.108596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.895 [2024-07-15 22:26:11.108612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.895 [2024-07-15 22:26:11.120277] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.895 [2024-07-15 22:26:11.120681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.895 [2024-07-15 22:26:11.120696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.895 [2024-07-15 22:26:11.132381] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.895 [2024-07-15 22:26:11.132659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.895 [2024-07-15 22:26:11.132674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.895 [2024-07-15 22:26:11.144524] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.895 [2024-07-15 22:26:11.144877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.895 [2024-07-15 22:26:11.144892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.895 [2024-07-15 22:26:11.156663] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.895 [2024-07-15 22:26:11.157047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.895 [2024-07-15 22:26:11.157062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.895 [2024-07-15 22:26:11.168813] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.895 [2024-07-15 22:26:11.169231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:125 nsid:1 lba:15486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.895 [2024-07-15 22:26:11.169246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.895 [2024-07-15 22:26:11.180945] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.895 [2024-07-15 22:26:11.181337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.895 [2024-07-15 22:26:11.181353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.895 [2024-07-15 22:26:11.193104] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.895 [2024-07-15 22:26:11.193563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.895 [2024-07-15 22:26:11.193579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.895 [2024-07-15 22:26:11.205249] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.895 [2024-07-15 22:26:11.205621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.895 [2024-07-15 22:26:11.205635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:45.895 [2024-07-15 22:26:11.217357] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:45.895 [2024-07-15 22:26:11.217755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.895 [2024-07-15 22:26:11.217770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.156 [2024-07-15 22:26:11.229532] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.156 [2024-07-15 22:26:11.229803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.156 [2024-07-15 22:26:11.229818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.156 [2024-07-15 22:26:11.241683] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.157 [2024-07-15 22:26:11.242130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.157 [2024-07-15 22:26:11.242145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.157 [2024-07-15 22:26:11.253831] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.157 [2024-07-15 22:26:11.254224] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.157 [2024-07-15 22:26:11.254238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.157 [2024-07-15 22:26:11.265961] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.157 [2024-07-15 22:26:11.266316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.157 [2024-07-15 22:26:11.266331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.157 [2024-07-15 22:26:11.278073] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.157 [2024-07-15 22:26:11.278550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.157 [2024-07-15 22:26:11.278565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.157 [2024-07-15 22:26:11.290211] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.157 [2024-07-15 22:26:11.290561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.157 [2024-07-15 22:26:11.290577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.157 [2024-07-15 22:26:11.302393] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.157 [2024-07-15 22:26:11.302778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.157 [2024-07-15 22:26:11.302793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.157 [2024-07-15 22:26:11.314572] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.157 [2024-07-15 22:26:11.314919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.157 [2024-07-15 22:26:11.314936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.157 [2024-07-15 22:26:11.326707] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.157 [2024-07-15 22:26:11.327116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.157 [2024-07-15 22:26:11.327136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.157 [2024-07-15 22:26:11.338925] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.157 [2024-07-15 22:26:11.339293] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.157 [2024-07-15 22:26:11.339308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.157 [2024-07-15 22:26:11.351032] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.157 [2024-07-15 22:26:11.351439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.157 [2024-07-15 22:26:11.351454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.157 [2024-07-15 22:26:11.363256] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.157 [2024-07-15 22:26:11.363560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.157 [2024-07-15 22:26:11.363576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.157 [2024-07-15 22:26:11.375375] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.157 [2024-07-15 22:26:11.375790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.157 [2024-07-15 22:26:11.375805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.157 [2024-07-15 22:26:11.387542] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.157 [2024-07-15 22:26:11.387956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.157 [2024-07-15 22:26:11.387971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.157 [2024-07-15 22:26:11.399698] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.157 [2024-07-15 22:26:11.400098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.157 [2024-07-15 22:26:11.400113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.157 [2024-07-15 22:26:11.411803] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.157 [2024-07-15 22:26:11.412204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.157 [2024-07-15 22:26:11.412219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.157 [2024-07-15 22:26:11.423945] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.157 
[2024-07-15 22:26:11.424206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.157 [2024-07-15 22:26:11.424227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.157 [2024-07-15 22:26:11.436070] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.157 [2024-07-15 22:26:11.436358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.157 [2024-07-15 22:26:11.436374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.157 [2024-07-15 22:26:11.448206] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.157 [2024-07-15 22:26:11.448670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.157 [2024-07-15 22:26:11.448686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.157 [2024-07-15 22:26:11.460298] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.157 [2024-07-15 22:26:11.460668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.157 [2024-07-15 22:26:11.460683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.157 [2024-07-15 22:26:11.472541] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.157 [2024-07-15 22:26:11.472807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.157 [2024-07-15 22:26:11.472822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.418 [2024-07-15 22:26:11.484683] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.418 [2024-07-15 22:26:11.484962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.418 [2024-07-15 22:26:11.484983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.418 [2024-07-15 22:26:11.496813] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.418 [2024-07-15 22:26:11.497207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.418 [2024-07-15 22:26:11.497222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.418 [2024-07-15 22:26:11.508908] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with 
pdu=0x2000190fe2e8 00:28:46.418 [2024-07-15 22:26:11.509269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.418 [2024-07-15 22:26:11.509285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.418 [2024-07-15 22:26:11.520990] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.418 [2024-07-15 22:26:11.521414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.418 [2024-07-15 22:26:11.521430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.418 [2024-07-15 22:26:11.533186] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.418 [2024-07-15 22:26:11.533620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.418 [2024-07-15 22:26:11.533636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.418 [2024-07-15 22:26:11.545297] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.418 [2024-07-15 22:26:11.545652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.418 [2024-07-15 22:26:11.545668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.418 [2024-07-15 22:26:11.557464] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.418 [2024-07-15 22:26:11.557858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.419 [2024-07-15 22:26:11.557874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.419 [2024-07-15 22:26:11.569664] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.419 [2024-07-15 22:26:11.570085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.419 [2024-07-15 22:26:11.570101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.419 [2024-07-15 22:26:11.581730] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.419 [2024-07-15 22:26:11.582147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.419 [2024-07-15 22:26:11.582163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.419 [2024-07-15 22:26:11.593841] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.419 [2024-07-15 22:26:11.594239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.419 [2024-07-15 22:26:11.594255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.419 [2024-07-15 22:26:11.606010] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.419 [2024-07-15 22:26:11.606365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.419 [2024-07-15 22:26:11.606381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.419 [2024-07-15 22:26:11.618129] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.419 [2024-07-15 22:26:11.618511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.419 [2024-07-15 22:26:11.618526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.419 [2024-07-15 22:26:11.630297] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.419 [2024-07-15 22:26:11.630566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.419 [2024-07-15 22:26:11.630584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.419 [2024-07-15 22:26:11.642434] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.419 [2024-07-15 22:26:11.642794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.419 [2024-07-15 22:26:11.642810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.419 [2024-07-15 22:26:11.654559] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.419 [2024-07-15 22:26:11.655026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.419 [2024-07-15 22:26:11.655041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.419 [2024-07-15 22:26:11.666668] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.419 [2024-07-15 22:26:11.667080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.419 [2024-07-15 22:26:11.667096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.419 [2024-07-15 22:26:11.678775] tcp.c:2081:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.419 [2024-07-15 22:26:11.679168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.419 [2024-07-15 22:26:11.679183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.419 [2024-07-15 22:26:11.690917] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.419 [2024-07-15 22:26:11.691182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.419 [2024-07-15 22:26:11.691198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.419 [2024-07-15 22:26:11.703026] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.419 [2024-07-15 22:26:11.703306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.419 [2024-07-15 22:26:11.703322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.419 [2024-07-15 22:26:11.715110] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.419 [2024-07-15 22:26:11.715533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.419 [2024-07-15 22:26:11.715548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.419 [2024-07-15 22:26:11.727248] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.419 [2024-07-15 22:26:11.727594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.419 [2024-07-15 22:26:11.727610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.419 [2024-07-15 22:26:11.739387] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.419 [2024-07-15 22:26:11.739824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.419 [2024-07-15 22:26:11.739839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.680 [2024-07-15 22:26:11.751551] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.681 [2024-07-15 22:26:11.751904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.681 [2024-07-15 22:26:11.751920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.681 [2024-07-15 22:26:11.763676] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.681 [2024-07-15 22:26:11.764031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.681 [2024-07-15 22:26:11.764046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.681 [2024-07-15 22:26:11.775794] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.681 [2024-07-15 22:26:11.776045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.681 [2024-07-15 22:26:11.776061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.681 [2024-07-15 22:26:11.787909] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.681 [2024-07-15 22:26:11.788371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.681 [2024-07-15 22:26:11.788386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.681 [2024-07-15 22:26:11.800057] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.681 [2024-07-15 22:26:11.800414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.681 [2024-07-15 22:26:11.800430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.681 [2024-07-15 22:26:11.812111] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.681 [2024-07-15 22:26:11.812477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.681 [2024-07-15 22:26:11.812493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.681 [2024-07-15 22:26:11.824253] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.681 [2024-07-15 22:26:11.824536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.681 [2024-07-15 22:26:11.824551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.681 [2024-07-15 22:26:11.836384] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.681 [2024-07-15 22:26:11.836821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.681 [2024-07-15 22:26:11.836836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.681 [2024-07-15 
22:26:11.848472] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.681 [2024-07-15 22:26:11.848846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.681 [2024-07-15 22:26:11.848862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.681 [2024-07-15 22:26:11.860570] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.681 [2024-07-15 22:26:11.860972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.681 [2024-07-15 22:26:11.860987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.681 [2024-07-15 22:26:11.872742] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.681 [2024-07-15 22:26:11.873160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.681 [2024-07-15 22:26:11.873175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.681 [2024-07-15 22:26:11.884819] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.681 [2024-07-15 22:26:11.885228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.681 [2024-07-15 22:26:11.885243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.681 [2024-07-15 22:26:11.896987] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.681 [2024-07-15 22:26:11.897242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.681 [2024-07-15 22:26:11.897257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.681 [2024-07-15 22:26:11.909099] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.681 [2024-07-15 22:26:11.909490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.681 [2024-07-15 22:26:11.909505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.681 [2024-07-15 22:26:11.921305] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.681 [2024-07-15 22:26:11.921563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.681 [2024-07-15 22:26:11.921578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
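The block above repeats one pattern per injected error: the initiator's tcp.c data_crc32_calc_done() flags a CRC32C data-digest mismatch on a received PDU, the offending WRITE is printed, and its completion is returned with the generic status COMMAND TRANSIENT TRANSPORT ERROR (00/22). The digest_error test only cares about the total number of such completions, which it reads back from the bdev's NVMe error counters. A minimal sketch of that query, with the socket, bdev name and jq filter taken from the digest.sh trace further down and the rpc.py path abbreviated:

  # Count transient-transport-error completions via the bdev iostat RPC.
  errs=$(./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errs > 0 )) && echo "saw $errs digest-induced transient transport errors"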
00:28:46.681 [2024-07-15 22:26:11.933444] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.681 [2024-07-15 22:26:11.933837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.681 [2024-07-15 22:26:11.933852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.681 [2024-07-15 22:26:11.945539] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.681 [2024-07-15 22:26:11.945800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.681 [2024-07-15 22:26:11.945818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.681 [2024-07-15 22:26:11.957671] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.681 [2024-07-15 22:26:11.958155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.681 [2024-07-15 22:26:11.958170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.681 [2024-07-15 22:26:11.969833] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.681 [2024-07-15 22:26:11.970247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.681 [2024-07-15 22:26:11.970263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.681 [2024-07-15 22:26:11.981926] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.681 [2024-07-15 22:26:11.982323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.681 [2024-07-15 22:26:11.982338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.681 [2024-07-15 22:26:11.994060] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.681 [2024-07-15 22:26:11.994492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.681 [2024-07-15 22:26:11.994507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.942 [2024-07-15 22:26:12.006147] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.942 [2024-07-15 22:26:12.006404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.942 [2024-07-15 22:26:12.006418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d 
p:0 m:0 dnr:0 00:28:46.942 [2024-07-15 22:26:12.018278] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.942 [2024-07-15 22:26:12.018535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.942 [2024-07-15 22:26:12.018550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.942 [2024-07-15 22:26:12.030541] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.942 [2024-07-15 22:26:12.030813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.942 [2024-07-15 22:26:12.030828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.942 [2024-07-15 22:26:12.042648] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.942 [2024-07-15 22:26:12.042910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.942 [2024-07-15 22:26:12.042926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.942 [2024-07-15 22:26:12.054761] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327aa0) with pdu=0x2000190fe2e8 00:28:46.942 [2024-07-15 22:26:12.055196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.942 [2024-07-15 22:26:12.055212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:46.942 00:28:46.942 Latency(us) 00:28:46.942 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:46.942 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:46.942 nvme0n1 : 2.01 20937.67 81.79 0.00 0.00 6101.75 5597.87 17039.36 00:28:46.942 =================================================================================================================== 00:28:46.942 Total : 20937.67 81.79 0.00 0.00 6101.75 5597.87 17039.36 00:28:46.942 0 00:28:46.942 22:26:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:46.942 22:26:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:46.942 22:26:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:46.942 | .driver_specific 00:28:46.942 | .nvme_error 00:28:46.942 | .status_code 00:28:46.942 | .command_transient_transport_error' 00:28:46.942 22:26:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:46.942 22:26:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 164 > 0 )) 00:28:46.942 22:26:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2948991 00:28:46.942 22:26:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # 
'[' -z 2948991 ']' 00:28:46.942 22:26:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2948991 00:28:46.942 22:26:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:28:46.942 22:26:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:46.942 22:26:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2948991 00:28:47.203 22:26:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:47.203 22:26:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:47.203 22:26:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2948991' 00:28:47.203 killing process with pid 2948991 00:28:47.203 22:26:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2948991 00:28:47.203 Received shutdown signal, test time was about 2.000000 seconds 00:28:47.203 00:28:47.203 Latency(us) 00:28:47.203 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:47.203 =================================================================================================================== 00:28:47.203 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:47.203 22:26:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2948991 00:28:47.203 22:26:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:28:47.203 22:26:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:47.203 22:26:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:47.203 22:26:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:47.204 22:26:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:47.204 22:26:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2949673 00:28:47.204 22:26:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2949673 /var/tmp/bperf.sock 00:28:47.204 22:26:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2949673 ']' 00:28:47.204 22:26:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:28:47.204 22:26:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:47.204 22:26:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:47.204 22:26:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:47.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:47.204 22:26:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:47.204 22:26:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:47.204 [2024-07-15 22:26:12.470371] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
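At this point digest.sh has finished the 4 KiB job, verified its transient-error count, killed that bdevperf instance, and is launching the next job (128 KiB randwrite at queue depth 16). The launch pattern is: start bdevperf idle with -z so it only listens on its private RPC socket, remember the pid, and block until that socket exists before configuring anything. A rough equivalent, with the polling loop standing in for the harness's waitforlisten helper:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  # Stand-in for waitforlisten: poll until the UNIX-domain RPC socket appears.
  while [ ! -S /var/tmp/bperf.sock ]; do sleep 0.1; done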
00:28:47.204 [2024-07-15 22:26:12.470426] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2949673 ] 00:28:47.204 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:47.204 Zero copy mechanism will not be used. 00:28:47.204 EAL: No free 2048 kB hugepages reported on node 1 00:28:47.465 [2024-07-15 22:26:12.545622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.465 [2024-07-15 22:26:12.597075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:48.035 22:26:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:48.035 22:26:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:48.035 22:26:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:48.035 22:26:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:48.295 22:26:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:48.295 22:26:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.295 22:26:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:48.295 22:26:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.295 22:26:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:48.295 22:26:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:48.556 nvme0n1 00:28:48.556 22:26:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:48.556 22:26:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.556 22:26:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:48.556 22:26:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.556 22:26:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:48.556 22:26:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:48.556 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:48.556 Zero copy mechanism will not be used. 00:28:48.556 Running I/O for 2 seconds... 
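The trace above is the complete error-injection setup for this run, driven over two RPC sockets: bdevperf's private socket (/var/tmp/bperf.sock) for the initiator-side options and controller attach, and the nvmf target's socket for the accel-layer crc32c corruption. Condensed into plain rpc.py calls, with the commands and flags copied from the trace and the target assumed to answer on SPDK's default RPC socket (hence no -s on those two calls):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"
  # Keep per-status-code NVMe error counters and retry I/O indefinitely.
  $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Clear any crc32c injection left over on the target's accel layer.
  $RPC accel_error_inject_error -o crc32c -t disable
  # Attach the NVMe-oF/TCP controller with data digest enabled (--ddgst).
  $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Arm crc32c corruption (injection flags exactly as traced above).
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
  # Kick off the timed workload; the digest-error completions that follow are expected.
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Disabling injection before the attach and only arming corruption once nvme0n1 is up presumably keeps the connect and initial admin traffic clean, so corrupted crc32c results land on the data path alone; the ordering here simply mirrors the trace.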
00:28:48.556 [2024-07-15 22:26:13.773078] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:48.556 [2024-07-15 22:26:13.773325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.556 [2024-07-15 22:26:13.773352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.556 [2024-07-15 22:26:13.783182] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:48.556 [2024-07-15 22:26:13.783334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.556 [2024-07-15 22:26:13.783353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.556 [2024-07-15 22:26:13.792411] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:48.556 [2024-07-15 22:26:13.792640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.556 [2024-07-15 22:26:13.792657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.556 [2024-07-15 22:26:13.801394] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:48.556 [2024-07-15 22:26:13.801715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.556 [2024-07-15 22:26:13.801733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.556 [2024-07-15 22:26:13.811534] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:48.556 [2024-07-15 22:26:13.811852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.556 [2024-07-15 22:26:13.811869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.556 [2024-07-15 22:26:13.821371] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:48.556 [2024-07-15 22:26:13.821691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.556 [2024-07-15 22:26:13.821707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.556 [2024-07-15 22:26:13.832292] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:48.556 [2024-07-15 22:26:13.832628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.556 [2024-07-15 22:26:13.832645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.556 [2024-07-15 22:26:13.843402] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:48.556 [2024-07-15 22:26:13.843740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.556 [2024-07-15 22:26:13.843756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.556 [2024-07-15 22:26:13.854148] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:48.556 [2024-07-15 22:26:13.854427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.556 [2024-07-15 22:26:13.854443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.556 [2024-07-15 22:26:13.864680] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:48.556 [2024-07-15 22:26:13.864771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.556 [2024-07-15 22:26:13.864786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.556 [2024-07-15 22:26:13.874566] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:48.556 [2024-07-15 22:26:13.874925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.556 [2024-07-15 22:26:13.874942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.818 [2024-07-15 22:26:13.886189] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:48.818 [2024-07-15 22:26:13.886584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.818 [2024-07-15 22:26:13.886600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.818 [2024-07-15 22:26:13.897665] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:48.818 [2024-07-15 22:26:13.898110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.818 [2024-07-15 22:26:13.898129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.818 [2024-07-15 22:26:13.908798] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:48.818 [2024-07-15 22:26:13.909113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.818 [2024-07-15 22:26:13.909133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.818 [2024-07-15 22:26:13.918416] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:48.818 [2024-07-15 22:26:13.918643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.818 [2024-07-15 22:26:13.918658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.818 [2024-07-15 22:26:13.928562] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:48.818 [2024-07-15 22:26:13.928649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.818 [2024-07-15 22:26:13.928663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.818 [2024-07-15 22:26:13.939650] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:48.818 [2024-07-15 22:26:13.939923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.818 [2024-07-15 22:26:13.939944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.818 [2024-07-15 22:26:13.949734] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:48.818 [2024-07-15 22:26:13.950060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.818 [2024-07-15 22:26:13.950080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.818 [2024-07-15 22:26:13.959850] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:48.818 [2024-07-15 22:26:13.960178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.818 [2024-07-15 22:26:13.960195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.818 [2024-07-15 22:26:13.970023] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:48.818 [2024-07-15 22:26:13.970297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.818 [2024-07-15 22:26:13.970314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.818 [2024-07-15 22:26:13.980960] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:48.818 [2024-07-15 22:26:13.981301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.818 [2024-07-15 22:26:13.981318] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.818 [2024-07-15 22:26:13.990820] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:48.818 [2024-07-15 22:26:13.990993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.818 [2024-07-15 22:26:13.991007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.818 [2024-07-15 22:26:14.002917] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:48.818 [2024-07-15 22:26:14.003135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.818 [2024-07-15 22:26:14.003150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.818 [2024-07-15 22:26:14.012714] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:48.818 [2024-07-15 22:26:14.013018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.818 [2024-07-15 22:26:14.013035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.818 [2024-07-15 22:26:14.023267] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:48.818 [2024-07-15 22:26:14.023622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.818 [2024-07-15 22:26:14.023638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.818 [2024-07-15 22:26:14.033485] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:48.818 [2024-07-15 22:26:14.033856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.818 [2024-07-15 22:26:14.033872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.818 [2024-07-15 22:26:14.043565] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:48.818 [2024-07-15 22:26:14.043771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.818 [2024-07-15 22:26:14.043786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.818 [2024-07-15 22:26:14.052610] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:48.818 [2024-07-15 22:26:14.052859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.818 
[2024-07-15 22:26:14.052874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.818 [2024-07-15 22:26:14.062186] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:48.818 [2024-07-15 22:26:14.062396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.818 [2024-07-15 22:26:14.062412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.818 [2024-07-15 22:26:14.072957] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:48.818 [2024-07-15 22:26:14.073363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.818 [2024-07-15 22:26:14.073380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.818 [2024-07-15 22:26:14.082491] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:48.818 [2024-07-15 22:26:14.082866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.818 [2024-07-15 22:26:14.082883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.818 [2024-07-15 22:26:14.090231] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:48.818 [2024-07-15 22:26:14.090461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.818 [2024-07-15 22:26:14.090477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.818 [2024-07-15 22:26:14.098913] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:48.818 [2024-07-15 22:26:14.099129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.818 [2024-07-15 22:26:14.099144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.818 [2024-07-15 22:26:14.108460] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:48.819 [2024-07-15 22:26:14.108828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.819 [2024-07-15 22:26:14.108844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.819 [2024-07-15 22:26:14.118618] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:48.819 [2024-07-15 22:26:14.119042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.819 [2024-07-15 22:26:14.119061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.819 [2024-07-15 22:26:14.129095] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:48.819 [2024-07-15 22:26:14.129323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.819 [2024-07-15 22:26:14.129339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.819 [2024-07-15 22:26:14.136836] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:48.819 [2024-07-15 22:26:14.137159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.819 [2024-07-15 22:26:14.137176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.080 [2024-07-15 22:26:14.146975] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.080 [2024-07-15 22:26:14.147397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.080 [2024-07-15 22:26:14.147413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.080 [2024-07-15 22:26:14.155636] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.080 [2024-07-15 22:26:14.156044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.080 [2024-07-15 22:26:14.156059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.080 [2024-07-15 22:26:14.165780] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.080 [2024-07-15 22:26:14.166167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.080 [2024-07-15 22:26:14.166183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.080 [2024-07-15 22:26:14.175865] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.080 [2024-07-15 22:26:14.176073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.080 [2024-07-15 22:26:14.176088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.080 [2024-07-15 22:26:14.184963] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.080 [2024-07-15 22:26:14.185186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.080 [2024-07-15 22:26:14.185201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.080 [2024-07-15 22:26:14.194046] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.080 [2024-07-15 22:26:14.194414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.080 [2024-07-15 22:26:14.194430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.080 [2024-07-15 22:26:14.203282] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.080 [2024-07-15 22:26:14.203645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.080 [2024-07-15 22:26:14.203661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.080 [2024-07-15 22:26:14.212524] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.080 [2024-07-15 22:26:14.212837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.080 [2024-07-15 22:26:14.212852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.080 [2024-07-15 22:26:14.223183] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.080 [2024-07-15 22:26:14.223522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.081 [2024-07-15 22:26:14.223538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.081 [2024-07-15 22:26:14.233837] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.081 [2024-07-15 22:26:14.234233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.081 [2024-07-15 22:26:14.234249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.081 [2024-07-15 22:26:14.243584] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.081 [2024-07-15 22:26:14.243789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.081 [2024-07-15 22:26:14.243805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.081 [2024-07-15 22:26:14.253685] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.081 [2024-07-15 22:26:14.253934] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.081 [2024-07-15 22:26:14.253957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.081 [2024-07-15 22:26:14.263636] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.081 [2024-07-15 22:26:14.263968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.081 [2024-07-15 22:26:14.263984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.081 [2024-07-15 22:26:14.274552] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.081 [2024-07-15 22:26:14.274760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.081 [2024-07-15 22:26:14.274775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.081 [2024-07-15 22:26:14.283625] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.081 [2024-07-15 22:26:14.283995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.081 [2024-07-15 22:26:14.284011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.081 [2024-07-15 22:26:14.291575] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.081 [2024-07-15 22:26:14.291781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.081 [2024-07-15 22:26:14.291796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.081 [2024-07-15 22:26:14.298734] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.081 [2024-07-15 22:26:14.299095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.081 [2024-07-15 22:26:14.299111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.081 [2024-07-15 22:26:14.307975] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.081 [2024-07-15 22:26:14.308184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.081 [2024-07-15 22:26:14.308200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.081 [2024-07-15 22:26:14.314440] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.081 
[2024-07-15 22:26:14.314649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.081 [2024-07-15 22:26:14.314664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.081 [2024-07-15 22:26:14.322843] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.081 [2024-07-15 22:26:14.323162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.081 [2024-07-15 22:26:14.323178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.081 [2024-07-15 22:26:14.331761] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.081 [2024-07-15 22:26:14.332126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.081 [2024-07-15 22:26:14.332142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.081 [2024-07-15 22:26:14.339933] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.081 [2024-07-15 22:26:14.340141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.081 [2024-07-15 22:26:14.340156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.081 [2024-07-15 22:26:14.348690] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.081 [2024-07-15 22:26:14.349028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.081 [2024-07-15 22:26:14.349044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.081 [2024-07-15 22:26:14.357442] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.081 [2024-07-15 22:26:14.357646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.081 [2024-07-15 22:26:14.357667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.081 [2024-07-15 22:26:14.365471] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.081 [2024-07-15 22:26:14.365891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.081 [2024-07-15 22:26:14.365907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.081 [2024-07-15 22:26:14.374816] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.081 [2024-07-15 22:26:14.375211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.081 [2024-07-15 22:26:14.375227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.081 [2024-07-15 22:26:14.384545] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.081 [2024-07-15 22:26:14.384756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.081 [2024-07-15 22:26:14.384772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.081 [2024-07-15 22:26:14.395320] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.081 [2024-07-15 22:26:14.395675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.081 [2024-07-15 22:26:14.395691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.343 [2024-07-15 22:26:14.405292] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.343 [2024-07-15 22:26:14.405525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.343 [2024-07-15 22:26:14.405540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.343 [2024-07-15 22:26:14.415141] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.343 [2024-07-15 22:26:14.415463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.343 [2024-07-15 22:26:14.415479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.343 [2024-07-15 22:26:14.423872] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.343 [2024-07-15 22:26:14.424076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.343 [2024-07-15 22:26:14.424092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.343 [2024-07-15 22:26:14.432634] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.343 [2024-07-15 22:26:14.432911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.343 [2024-07-15 22:26:14.432927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.343 [2024-07-15 22:26:14.441432] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.343 [2024-07-15 22:26:14.441761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.343 [2024-07-15 22:26:14.441777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.343 [2024-07-15 22:26:14.450727] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.343 [2024-07-15 22:26:14.450937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.344 [2024-07-15 22:26:14.450952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.344 [2024-07-15 22:26:14.460286] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.344 [2024-07-15 22:26:14.460670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.344 [2024-07-15 22:26:14.460686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.344 [2024-07-15 22:26:14.469277] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.344 [2024-07-15 22:26:14.469631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.344 [2024-07-15 22:26:14.469647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.344 [2024-07-15 22:26:14.477431] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.344 [2024-07-15 22:26:14.477727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.344 [2024-07-15 22:26:14.477743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.344 [2024-07-15 22:26:14.485406] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.344 [2024-07-15 22:26:14.485611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.344 [2024-07-15 22:26:14.485626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.344 [2024-07-15 22:26:14.492631] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.344 [2024-07-15 22:26:14.493038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.344 [2024-07-15 22:26:14.493054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
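One visible difference from the earlier 4 KiB job: every WRITE above carries len:32 instead of len:1. With the 4096-byte block size implied by the first job's records (len:1 with a 0x1000-byte SGL data block), a 131072-byte bdevperf I/O spans 32 logical blocks, which is exactly what these commands report:

  # 131072-byte I/O over 4096-byte blocks -> 32 blocks per WRITE (matches len:32)
  echo $(( 131072 / 4096 ))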
00:28:49.344 [2024-07-15 22:26:14.498278] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.344 [2024-07-15 22:26:14.498481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.344 [2024-07-15 22:26:14.498497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.344 [2024-07-15 22:26:14.506933] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.344 [2024-07-15 22:26:14.507146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.344 [2024-07-15 22:26:14.507161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.344 [2024-07-15 22:26:14.516374] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.344 [2024-07-15 22:26:14.516764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.344 [2024-07-15 22:26:14.516780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.344 [2024-07-15 22:26:14.525558] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.344 [2024-07-15 22:26:14.525978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.344 [2024-07-15 22:26:14.525994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.344 [2024-07-15 22:26:14.536568] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.344 [2024-07-15 22:26:14.536771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.344 [2024-07-15 22:26:14.536787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.344 [2024-07-15 22:26:14.544739] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.344 [2024-07-15 22:26:14.545060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.344 [2024-07-15 22:26:14.545076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.344 [2024-07-15 22:26:14.553398] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.344 [2024-07-15 22:26:14.553603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.344 [2024-07-15 22:26:14.553618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.344 [2024-07-15 22:26:14.562913] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.344 [2024-07-15 22:26:14.563117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.344 [2024-07-15 22:26:14.563137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.344 [2024-07-15 22:26:14.572244] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.344 [2024-07-15 22:26:14.572583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.344 [2024-07-15 22:26:14.572599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.344 [2024-07-15 22:26:14.581743] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.344 [2024-07-15 22:26:14.582068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.344 [2024-07-15 22:26:14.582085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.344 [2024-07-15 22:26:14.592211] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.344 [2024-07-15 22:26:14.592445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.344 [2024-07-15 22:26:14.592463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.344 [2024-07-15 22:26:14.601770] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.344 [2024-07-15 22:26:14.602060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.344 [2024-07-15 22:26:14.602076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.344 [2024-07-15 22:26:14.612598] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.344 [2024-07-15 22:26:14.612982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.344 [2024-07-15 22:26:14.612998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.344 [2024-07-15 22:26:14.621526] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.344 [2024-07-15 22:26:14.621850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.344 [2024-07-15 22:26:14.621866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.344 [2024-07-15 22:26:14.629542] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.344 [2024-07-15 22:26:14.629952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.344 [2024-07-15 22:26:14.629969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.344 [2024-07-15 22:26:14.637785] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.344 [2024-07-15 22:26:14.638064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.344 [2024-07-15 22:26:14.638080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.344 [2024-07-15 22:26:14.645616] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.344 [2024-07-15 22:26:14.645819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.344 [2024-07-15 22:26:14.645835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.344 [2024-07-15 22:26:14.653456] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.344 [2024-07-15 22:26:14.653709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.344 [2024-07-15 22:26:14.653724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.344 [2024-07-15 22:26:14.661915] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.344 [2024-07-15 22:26:14.662109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.344 [2024-07-15 22:26:14.662129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.607 [2024-07-15 22:26:14.671062] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.607 [2024-07-15 22:26:14.671453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.607 [2024-07-15 22:26:14.671468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.607 [2024-07-15 22:26:14.679227] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.607 [2024-07-15 22:26:14.679432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.607 [2024-07-15 22:26:14.679448] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.607 [2024-07-15 22:26:14.687103] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.607 [2024-07-15 22:26:14.687393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.607 [2024-07-15 22:26:14.687409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.607 [2024-07-15 22:26:14.694347] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.607 [2024-07-15 22:26:14.694709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.607 [2024-07-15 22:26:14.694725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.607 [2024-07-15 22:26:14.703188] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.607 [2024-07-15 22:26:14.703454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.607 [2024-07-15 22:26:14.703469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.607 [2024-07-15 22:26:14.710131] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.607 [2024-07-15 22:26:14.710492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.607 [2024-07-15 22:26:14.710509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.607 [2024-07-15 22:26:14.720237] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.607 [2024-07-15 22:26:14.720457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.607 [2024-07-15 22:26:14.720473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.607 [2024-07-15 22:26:14.729603] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.607 [2024-07-15 22:26:14.729864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.607 [2024-07-15 22:26:14.729880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.607 [2024-07-15 22:26:14.738569] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.607 [2024-07-15 22:26:14.738924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.607 
[2024-07-15 22:26:14.738941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.607 [2024-07-15 22:26:14.747259] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.607 [2024-07-15 22:26:14.747477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.607 [2024-07-15 22:26:14.747492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.607 [2024-07-15 22:26:14.757451] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.607 [2024-07-15 22:26:14.757803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.607 [2024-07-15 22:26:14.757819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.607 [2024-07-15 22:26:14.767046] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.607 [2024-07-15 22:26:14.767430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.607 [2024-07-15 22:26:14.767447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.607 [2024-07-15 22:26:14.778185] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.607 [2024-07-15 22:26:14.778525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.607 [2024-07-15 22:26:14.778541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.607 [2024-07-15 22:26:14.787709] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.607 [2024-07-15 22:26:14.788154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.607 [2024-07-15 22:26:14.788170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.607 [2024-07-15 22:26:14.798086] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.607 [2024-07-15 22:26:14.798403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.607 [2024-07-15 22:26:14.798419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.607 [2024-07-15 22:26:14.806909] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.607 [2024-07-15 22:26:14.807118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.607 [2024-07-15 22:26:14.807138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.607 [2024-07-15 22:26:14.816210] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.607 [2024-07-15 22:26:14.816555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.607 [2024-07-15 22:26:14.816571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.607 [2024-07-15 22:26:14.825930] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.607 [2024-07-15 22:26:14.826261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.607 [2024-07-15 22:26:14.826281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.607 [2024-07-15 22:26:14.833558] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.607 [2024-07-15 22:26:14.833803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.607 [2024-07-15 22:26:14.833818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.607 [2024-07-15 22:26:14.842806] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.607 [2024-07-15 22:26:14.843213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.607 [2024-07-15 22:26:14.843229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.607 [2024-07-15 22:26:14.852776] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.607 [2024-07-15 22:26:14.853101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.607 [2024-07-15 22:26:14.853117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.607 [2024-07-15 22:26:14.863425] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.607 [2024-07-15 22:26:14.863769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.607 [2024-07-15 22:26:14.863785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.607 [2024-07-15 22:26:14.874014] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.607 [2024-07-15 22:26:14.874340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.607 [2024-07-15 22:26:14.874357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.607 [2024-07-15 22:26:14.885848] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.608 [2024-07-15 22:26:14.886241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.608 [2024-07-15 22:26:14.886258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.608 [2024-07-15 22:26:14.897529] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.608 [2024-07-15 22:26:14.897865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.608 [2024-07-15 22:26:14.897881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.608 [2024-07-15 22:26:14.908962] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.608 [2024-07-15 22:26:14.909403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.608 [2024-07-15 22:26:14.909419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.608 [2024-07-15 22:26:14.919786] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.608 [2024-07-15 22:26:14.920006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.608 [2024-07-15 22:26:14.920020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.871 [2024-07-15 22:26:14.931203] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.871 [2024-07-15 22:26:14.931560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.871 [2024-07-15 22:26:14.931577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.871 [2024-07-15 22:26:14.942250] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.871 [2024-07-15 22:26:14.942629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.871 [2024-07-15 22:26:14.942645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.871 [2024-07-15 22:26:14.954280] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.871 [2024-07-15 22:26:14.954687] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.871 [2024-07-15 22:26:14.954703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.871 [2024-07-15 22:26:14.965916] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.871 [2024-07-15 22:26:14.966299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.871 [2024-07-15 22:26:14.966317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.871 [2024-07-15 22:26:14.977055] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.871 [2024-07-15 22:26:14.977428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.871 [2024-07-15 22:26:14.977444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.871 [2024-07-15 22:26:14.989193] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.871 [2024-07-15 22:26:14.989572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.871 [2024-07-15 22:26:14.989588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.871 [2024-07-15 22:26:15.000598] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.871 [2024-07-15 22:26:15.000873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.871 [2024-07-15 22:26:15.000889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.871 [2024-07-15 22:26:15.012129] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.871 [2024-07-15 22:26:15.012406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.871 [2024-07-15 22:26:15.012426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.871 [2024-07-15 22:26:15.023088] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.871 [2024-07-15 22:26:15.023310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.871 [2024-07-15 22:26:15.023326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.871 [2024-07-15 22:26:15.033842] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.871 
[2024-07-15 22:26:15.034250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.871 [2024-07-15 22:26:15.034266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.871 [2024-07-15 22:26:15.045325] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.871 [2024-07-15 22:26:15.045534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.871 [2024-07-15 22:26:15.045550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.871 [2024-07-15 22:26:15.056755] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.871 [2024-07-15 22:26:15.057212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.871 [2024-07-15 22:26:15.057228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.871 [2024-07-15 22:26:15.069110] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.871 [2024-07-15 22:26:15.069336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.871 [2024-07-15 22:26:15.069351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.871 [2024-07-15 22:26:15.081102] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.871 [2024-07-15 22:26:15.081441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.871 [2024-07-15 22:26:15.081457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.871 [2024-07-15 22:26:15.092684] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.871 [2024-07-15 22:26:15.093019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.871 [2024-07-15 22:26:15.093036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.871 [2024-07-15 22:26:15.104525] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.871 [2024-07-15 22:26:15.104895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.871 [2024-07-15 22:26:15.104911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.871 [2024-07-15 22:26:15.115710] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.871 [2024-07-15 22:26:15.116063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.871 [2024-07-15 22:26:15.116079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.871 [2024-07-15 22:26:15.126916] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.871 [2024-07-15 22:26:15.127197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.871 [2024-07-15 22:26:15.127213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.871 [2024-07-15 22:26:15.138723] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.871 [2024-07-15 22:26:15.139093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.871 [2024-07-15 22:26:15.139109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.872 [2024-07-15 22:26:15.149931] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.872 [2024-07-15 22:26:15.150215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.872 [2024-07-15 22:26:15.150231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.872 [2024-07-15 22:26:15.161314] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.872 [2024-07-15 22:26:15.161566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.872 [2024-07-15 22:26:15.161582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.872 [2024-07-15 22:26:15.172701] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.872 [2024-07-15 22:26:15.173049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.872 [2024-07-15 22:26:15.173065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.872 [2024-07-15 22:26:15.183676] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:49.872 [2024-07-15 22:26:15.183922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.872 [2024-07-15 22:26:15.183937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.133 [2024-07-15 22:26:15.195429] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.133 [2024-07-15 22:26:15.195743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.133 [2024-07-15 22:26:15.195758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.133 [2024-07-15 22:26:15.206660] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.133 [2024-07-15 22:26:15.206944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.133 [2024-07-15 22:26:15.206960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.133 [2024-07-15 22:26:15.217215] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.133 [2024-07-15 22:26:15.217533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.133 [2024-07-15 22:26:15.217549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.133 [2024-07-15 22:26:15.225914] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.133 [2024-07-15 22:26:15.226350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.133 [2024-07-15 22:26:15.226366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.133 [2024-07-15 22:26:15.237253] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.133 [2024-07-15 22:26:15.237608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.133 [2024-07-15 22:26:15.237623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.133 [2024-07-15 22:26:15.247181] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.133 [2024-07-15 22:26:15.247473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.133 [2024-07-15 22:26:15.247489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.133 [2024-07-15 22:26:15.255392] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.133 [2024-07-15 22:26:15.255730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.133 [2024-07-15 22:26:15.255746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:28:50.133 [2024-07-15 22:26:15.264247] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.133 [2024-07-15 22:26:15.264589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.133 [2024-07-15 22:26:15.264605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.133 [2024-07-15 22:26:15.273887] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.133 [2024-07-15 22:26:15.274259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.133 [2024-07-15 22:26:15.274275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.133 [2024-07-15 22:26:15.284087] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.133 [2024-07-15 22:26:15.284437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.134 [2024-07-15 22:26:15.284453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.134 [2024-07-15 22:26:15.294520] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.134 [2024-07-15 22:26:15.294853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.134 [2024-07-15 22:26:15.294871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.134 [2024-07-15 22:26:15.304413] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.134 [2024-07-15 22:26:15.304638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.134 [2024-07-15 22:26:15.304653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.134 [2024-07-15 22:26:15.312532] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.134 [2024-07-15 22:26:15.312893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.134 [2024-07-15 22:26:15.312910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.134 [2024-07-15 22:26:15.322194] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.134 [2024-07-15 22:26:15.322400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.134 [2024-07-15 22:26:15.322416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.134 [2024-07-15 22:26:15.332308] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.134 [2024-07-15 22:26:15.332642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.134 [2024-07-15 22:26:15.332658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.134 [2024-07-15 22:26:15.342196] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.134 [2024-07-15 22:26:15.342442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.134 [2024-07-15 22:26:15.342458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.134 [2024-07-15 22:26:15.351271] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.134 [2024-07-15 22:26:15.351676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.134 [2024-07-15 22:26:15.351692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.134 [2024-07-15 22:26:15.359276] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.134 [2024-07-15 22:26:15.359629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.134 [2024-07-15 22:26:15.359645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.134 [2024-07-15 22:26:15.368546] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.134 [2024-07-15 22:26:15.368898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.134 [2024-07-15 22:26:15.368914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.134 [2024-07-15 22:26:15.377584] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.134 [2024-07-15 22:26:15.377985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.134 [2024-07-15 22:26:15.378001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.134 [2024-07-15 22:26:15.385809] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.134 [2024-07-15 22:26:15.386198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.134 [2024-07-15 22:26:15.386214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.134 [2024-07-15 22:26:15.395206] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.134 [2024-07-15 22:26:15.395525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.134 [2024-07-15 22:26:15.395541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.134 [2024-07-15 22:26:15.404618] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.134 [2024-07-15 22:26:15.404866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.134 [2024-07-15 22:26:15.404882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.134 [2024-07-15 22:26:15.414117] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.134 [2024-07-15 22:26:15.414337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.134 [2024-07-15 22:26:15.414353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.134 [2024-07-15 22:26:15.423414] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.134 [2024-07-15 22:26:15.423636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.134 [2024-07-15 22:26:15.423652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.134 [2024-07-15 22:26:15.433358] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.134 [2024-07-15 22:26:15.433728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.134 [2024-07-15 22:26:15.433744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.134 [2024-07-15 22:26:15.443457] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.134 [2024-07-15 22:26:15.443817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.134 [2024-07-15 22:26:15.443833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.134 [2024-07-15 22:26:15.453371] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.134 [2024-07-15 22:26:15.453770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.134 [2024-07-15 22:26:15.453787] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.395 [2024-07-15 22:26:15.460003] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.395 [2024-07-15 22:26:15.460320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.395 [2024-07-15 22:26:15.460337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.395 [2024-07-15 22:26:15.470465] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.395 [2024-07-15 22:26:15.470668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.395 [2024-07-15 22:26:15.470684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.395 [2024-07-15 22:26:15.478720] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.395 [2024-07-15 22:26:15.479040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.395 [2024-07-15 22:26:15.479056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.395 [2024-07-15 22:26:15.486846] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.395 [2024-07-15 22:26:15.487080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.395 [2024-07-15 22:26:15.487095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.395 [2024-07-15 22:26:15.495189] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.395 [2024-07-15 22:26:15.495399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.395 [2024-07-15 22:26:15.495414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.395 [2024-07-15 22:26:15.503685] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.395 [2024-07-15 22:26:15.503944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.395 [2024-07-15 22:26:15.503959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.395 [2024-07-15 22:26:15.512521] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.395 [2024-07-15 22:26:15.512766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.395 
[2024-07-15 22:26:15.512782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.395 [2024-07-15 22:26:15.521601] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.395 [2024-07-15 22:26:15.521935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.395 [2024-07-15 22:26:15.521950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.395 [2024-07-15 22:26:15.529928] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.396 [2024-07-15 22:26:15.530242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.396 [2024-07-15 22:26:15.530261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.396 [2024-07-15 22:26:15.538026] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.396 [2024-07-15 22:26:15.538352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.396 [2024-07-15 22:26:15.538368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.396 [2024-07-15 22:26:15.548181] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.396 [2024-07-15 22:26:15.548427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.396 [2024-07-15 22:26:15.548442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.396 [2024-07-15 22:26:15.555333] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.396 [2024-07-15 22:26:15.555538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.396 [2024-07-15 22:26:15.555554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.396 [2024-07-15 22:26:15.561643] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.396 [2024-07-15 22:26:15.561866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.396 [2024-07-15 22:26:15.561881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.396 [2024-07-15 22:26:15.568185] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.396 [2024-07-15 22:26:15.568415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.396 [2024-07-15 22:26:15.568430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.396 [2024-07-15 22:26:15.576765] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.396 [2024-07-15 22:26:15.577010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.396 [2024-07-15 22:26:15.577025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.396 [2024-07-15 22:26:15.585375] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.396 [2024-07-15 22:26:15.585707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.396 [2024-07-15 22:26:15.585724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.396 [2024-07-15 22:26:15.593869] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.396 [2024-07-15 22:26:15.594073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.396 [2024-07-15 22:26:15.594089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.396 [2024-07-15 22:26:15.601683] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.396 [2024-07-15 22:26:15.601888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.396 [2024-07-15 22:26:15.601904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.396 [2024-07-15 22:26:15.612358] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.396 [2024-07-15 22:26:15.612588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.396 [2024-07-15 22:26:15.612603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.396 [2024-07-15 22:26:15.620414] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.396 [2024-07-15 22:26:15.620618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.396 [2024-07-15 22:26:15.620633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.396 [2024-07-15 22:26:15.627798] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.396 [2024-07-15 22:26:15.628006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.396 [2024-07-15 22:26:15.628022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.396 [2024-07-15 22:26:15.637307] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.396 [2024-07-15 22:26:15.637664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.396 [2024-07-15 22:26:15.637680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.396 [2024-07-15 22:26:15.647079] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.396 [2024-07-15 22:26:15.647325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.396 [2024-07-15 22:26:15.647340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.396 [2024-07-15 22:26:15.659414] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.396 [2024-07-15 22:26:15.659801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.396 [2024-07-15 22:26:15.659818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.396 [2024-07-15 22:26:15.670745] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.396 [2024-07-15 22:26:15.671153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.396 [2024-07-15 22:26:15.671168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.396 [2024-07-15 22:26:15.681436] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.396 [2024-07-15 22:26:15.681852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.396 [2024-07-15 22:26:15.681872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.396 [2024-07-15 22:26:15.692187] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.396 [2024-07-15 22:26:15.692459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.396 [2024-07-15 22:26:15.692475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.396 [2024-07-15 22:26:15.702333] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.396 [2024-07-15 22:26:15.702684] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.396 [2024-07-15 22:26:15.702700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.396 [2024-07-15 22:26:15.713277] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.396 [2024-07-15 22:26:15.713654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.396 [2024-07-15 22:26:15.713671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.657 [2024-07-15 22:26:15.725674] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.657 [2024-07-15 22:26:15.726066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.657 [2024-07-15 22:26:15.726082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.657 [2024-07-15 22:26:15.736546] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.657 [2024-07-15 22:26:15.736824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.657 [2024-07-15 22:26:15.736839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.657 [2024-07-15 22:26:15.748086] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.657 [2024-07-15 22:26:15.748443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.657 [2024-07-15 22:26:15.748459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.657 [2024-07-15 22:26:15.758516] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241cca0) with pdu=0x2000190fef90 00:28:50.657 [2024-07-15 22:26:15.758878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.657 [2024-07-15 22:26:15.758893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.657 00:28:50.657 Latency(us) 00:28:50.657 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:50.657 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:50.658 nvme0n1 : 2.00 3195.72 399.46 0.00 0.00 4998.35 2430.29 14090.24 00:28:50.658 =================================================================================================================== 00:28:50.658 Total : 3195.72 399.46 0.00 0.00 4998.35 2430.29 14090.24 00:28:50.658 0 00:28:50.658 22:26:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:50.658 22:26:15 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:50.658 22:26:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:50.658 | .driver_specific 00:28:50.658 | .nvme_error 00:28:50.658 | .status_code 00:28:50.658 | .command_transient_transport_error' 00:28:50.658 22:26:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:50.658 22:26:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 206 > 0 )) 00:28:50.658 22:26:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2949673 00:28:50.658 22:26:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2949673 ']' 00:28:50.658 22:26:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2949673 00:28:50.658 22:26:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:28:50.658 22:26:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:50.658 22:26:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2949673 00:28:50.919 22:26:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:50.919 22:26:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:50.919 22:26:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2949673' 00:28:50.919 killing process with pid 2949673 00:28:50.919 22:26:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2949673 00:28:50.919 Received shutdown signal, test time was about 2.000000 seconds 00:28:50.919 00:28:50.919 Latency(us) 00:28:50.919 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:50.919 =================================================================================================================== 00:28:50.919 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:50.919 22:26:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2949673 00:28:50.919 22:26:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2947274 00:28:50.919 22:26:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2947274 ']' 00:28:50.919 22:26:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2947274 00:28:50.919 22:26:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:28:50.919 22:26:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:50.919 22:26:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2947274 00:28:50.919 22:26:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:50.919 22:26:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:50.919 22:26:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2947274' 00:28:50.919 killing process with pid 2947274 00:28:50.919 22:26:16 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2947274 00:28:50.919 22:26:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2947274 00:28:51.180 00:28:51.180 real 0m16.086s 00:28:51.180 user 0m31.642s 00:28:51.180 sys 0m3.250s 00:28:51.180 22:26:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:51.180 22:26:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:51.180 ************************************ 00:28:51.180 END TEST nvmf_digest_error 00:28:51.180 ************************************ 00:28:51.180 22:26:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:28:51.180 22:26:16 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:51.180 22:26:16 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:51.180 22:26:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:51.180 22:26:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:28:51.180 22:26:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:51.180 22:26:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:28:51.180 22:26:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:51.180 22:26:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:51.180 rmmod nvme_tcp 00:28:51.180 rmmod nvme_fabrics 00:28:51.180 rmmod nvme_keyring 00:28:51.180 22:26:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:51.180 22:26:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:28:51.180 22:26:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:28:51.180 22:26:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2947274 ']' 00:28:51.180 22:26:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2947274 00:28:51.180 22:26:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 2947274 ']' 00:28:51.180 22:26:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 2947274 00:28:51.180 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2947274) - No such process 00:28:51.180 22:26:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 2947274 is not found' 00:28:51.180 Process with pid 2947274 is not found 00:28:51.180 22:26:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:51.180 22:26:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:51.180 22:26:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:51.180 22:26:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:51.180 22:26:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:51.180 22:26:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.180 22:26:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:51.180 22:26:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:53.728 22:26:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:53.728 00:28:53.728 real 0m41.309s 00:28:53.728 user 1m5.291s 00:28:53.728 sys 0m11.676s 00:28:53.728 22:26:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 
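The pass/fail decision of the digest-error test above comes down to one iostat query: host/digest.sh@27 calls get_transient_errcount, which issues bdev_get_iostat against the bperf RPC socket and filters the NVMe error counters with jq; this run found 206 transient transport errors, confirming the injected data-digest corruption was surfaced to the host. A standalone sketch of that check, assuming bdevperf is still serving RPCs on /var/tmp/bperf.sock and that rpc.py is run from the SPDK tree (both paths appear in the trace above):

    errcount=$(./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # 206 in the run above; any value > 0 means the corrupted CRC32C data digests
    # came back as COMMAND TRANSIENT TRANSPORT ERROR completions.
    (( errcount > 0 )) && echo "digest errors detected: $errcount"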
00:28:53.728 22:26:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:53.728 ************************************ 00:28:53.728 END TEST nvmf_digest 00:28:53.728 ************************************ 00:28:53.728 22:26:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:53.728 22:26:18 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:28:53.728 22:26:18 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:28:53.728 22:26:18 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:28:53.728 22:26:18 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:53.728 22:26:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:53.728 22:26:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:53.728 22:26:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:53.728 ************************************ 00:28:53.728 START TEST nvmf_bdevperf 00:28:53.728 ************************************ 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:53.728 * Looking for test storage... 00:28:53.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 
-- # MALLOC_BDEV_SIZE=64 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:53.728 22:26:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:00.318 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:00.318 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:29:00.318 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:00.318 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:00.318 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:00.318 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:00.318 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:00.318 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:29:00.318 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:00.318 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:29:00.318 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:29:00.318 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:29:00.318 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:29:00.318 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:29:00.318 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:29:00.318 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:00.318 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:00.318 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:00.318 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:00.318 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:00.318 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:00.318 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:00.319 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:00.319 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:29:00.319 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:00.319 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:00.319 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:00.580 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:00.580 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:00.580 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:00.580 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:00.580 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:00.580 22:26:25 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:00.580 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:00.580 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:00.580 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.538 ms 00:29:00.580 00:29:00.580 --- 10.0.0.2 ping statistics --- 00:29:00.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:00.580 rtt min/avg/max/mdev = 0.538/0.538/0.538/0.000 ms 00:29:00.580 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:00.580 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:00.580 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.440 ms 00:29:00.580 00:29:00.580 --- 10.0.0.1 ping statistics --- 00:29:00.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:00.580 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:29:00.580 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:00.580 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:29:00.842 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:00.842 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:00.842 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:00.842 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:00.842 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:00.842 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:00.842 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:00.842 22:26:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:00.842 22:26:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:00.842 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:00.842 22:26:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:00.842 22:26:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:00.842 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2954536 00:29:00.842 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2954536 00:29:00.842 22:26:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:00.842 22:26:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2954536 ']' 00:29:00.842 22:26:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:00.842 22:26:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:00.842 22:26:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:00.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
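Everything nvmf_tcp_init did above reduces to moving one E810 port into a private network namespace, giving each side an address, and proving reachability both ways before any NVMe/TCP traffic is attempted. Condensed from the trace (interface names cvl_0_0/cvl_0_1 are the ports detected earlier; run as root):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target side lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                                  # initiator -> target (0.538 ms above)
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator (0.440 ms above)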
00:29:00.842 22:26:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:00.842 22:26:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:00.842 [2024-07-15 22:26:26.017512] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:29:00.842 [2024-07-15 22:26:26.017579] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:00.842 EAL: No free 2048 kB hugepages reported on node 1 00:29:00.842 [2024-07-15 22:26:26.106534] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:01.103 [2024-07-15 22:26:26.201627] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:01.103 [2024-07-15 22:26:26.201681] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:01.103 [2024-07-15 22:26:26.201689] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:01.103 [2024-07-15 22:26:26.201696] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:01.103 [2024-07-15 22:26:26.201702] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:01.103 [2024-07-15 22:26:26.202000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:01.103 [2024-07-15 22:26:26.202176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:01.103 [2024-07-15 22:26:26.202218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:01.674 22:26:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:01.674 22:26:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:29:01.674 22:26:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:01.674 22:26:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:01.674 22:26:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:01.674 22:26:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:01.674 22:26:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:01.674 22:26:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.674 22:26:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:01.674 [2024-07-15 22:26:26.835735] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:01.674 22:26:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.674 22:26:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:01.674 22:26:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.674 22:26:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:01.674 Malloc0 00:29:01.674 22:26:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.674 22:26:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:01.674 22:26:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
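The nvmfappstart step above boils down to launching nvmf_tgt inside the target namespace with the test's core mask (-m 0xE, reactors on cores 1-3) and blocking until its RPC socket answers. The harness does the waiting with its waitforlisten helper, so the polling loop below is only an approximation of it, not a copy; the subsystem configuration that follows is collected after the next stretch of trace.

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!                                   # 2954536 in this run
    # stand-in for waitforlisten: poll the default RPC socket until the target answers
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done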
00:29:01.674 22:26:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:01.674 22:26:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.674 22:26:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:01.674 22:26:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.674 22:26:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:01.674 22:26:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.674 22:26:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:01.674 22:26:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.674 22:26:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:01.674 [2024-07-15 22:26:26.898355] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:01.674 22:26:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.674 22:26:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:01.674 22:26:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:01.674 22:26:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:29:01.674 22:26:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:29:01.674 22:26:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:01.674 22:26:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:01.674 { 00:29:01.674 "params": { 00:29:01.674 "name": "Nvme$subsystem", 00:29:01.674 "trtype": "$TEST_TRANSPORT", 00:29:01.674 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.674 "adrfam": "ipv4", 00:29:01.674 "trsvcid": "$NVMF_PORT", 00:29:01.674 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.674 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.674 "hdgst": ${hdgst:-false}, 00:29:01.674 "ddgst": ${ddgst:-false} 00:29:01.674 }, 00:29:01.674 "method": "bdev_nvme_attach_controller" 00:29:01.674 } 00:29:01.674 EOF 00:29:01.674 )") 00:29:01.674 22:26:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:29:01.674 22:26:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:29:01.674 22:26:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:29:01.674 22:26:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:01.674 "params": { 00:29:01.674 "name": "Nvme1", 00:29:01.674 "trtype": "tcp", 00:29:01.674 "traddr": "10.0.0.2", 00:29:01.674 "adrfam": "ipv4", 00:29:01.674 "trsvcid": "4420", 00:29:01.674 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:01.674 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:01.674 "hdgst": false, 00:29:01.674 "ddgst": false 00:29:01.674 }, 00:29:01.674 "method": "bdev_nvme_attach_controller" 00:29:01.674 }' 00:29:01.674 [2024-07-15 22:26:26.952410] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
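With the target up, the bdevperf host test needs only five RPCs and one bdevperf invocation; the JSON blob printed above by gen_nvmf_target_json is what bdevperf receives on the inherited file descriptor. A plain-command sketch of the same sequence (rpc_cmd in the harness ultimately drives scripts/rpc.py; writing the config to a file named bperf.json is purely illustrative):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # bperf.json holds the gen_nvmf_target_json output shown above (Nvme1 -> 10.0.0.2:4420, digests off)
    ./build/examples/bdevperf --json bperf.json -q 128 -o 4096 -w verify -t 1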
00:29:01.674 [2024-07-15 22:26:26.952457] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2954721 ] 00:29:01.674 EAL: No free 2048 kB hugepages reported on node 1 00:29:01.935 [2024-07-15 22:26:27.009894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.935 [2024-07-15 22:26:27.074373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.195 Running I/O for 1 seconds... 00:29:03.138 00:29:03.138 Latency(us) 00:29:03.138 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:03.138 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:03.138 Verification LBA range: start 0x0 length 0x4000 00:29:03.138 Nvme1n1 : 1.01 10129.88 39.57 0.00 0.00 12574.43 2512.21 13871.79 00:29:03.138 =================================================================================================================== 00:29:03.138 Total : 10129.88 39.57 0.00 0.00 12574.43 2512.21 13871.79 00:29:03.399 22:26:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2955050 00:29:03.399 22:26:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:29:03.399 22:26:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:03.399 22:26:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:03.399 22:26:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:29:03.399 22:26:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:29:03.399 22:26:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:03.399 22:26:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:03.399 { 00:29:03.399 "params": { 00:29:03.399 "name": "Nvme$subsystem", 00:29:03.399 "trtype": "$TEST_TRANSPORT", 00:29:03.399 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:03.399 "adrfam": "ipv4", 00:29:03.399 "trsvcid": "$NVMF_PORT", 00:29:03.399 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:03.399 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:03.399 "hdgst": ${hdgst:-false}, 00:29:03.399 "ddgst": ${ddgst:-false} 00:29:03.399 }, 00:29:03.399 "method": "bdev_nvme_attach_controller" 00:29:03.399 } 00:29:03.399 EOF 00:29:03.399 )") 00:29:03.399 22:26:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:29:03.399 22:26:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:29:03.399 22:26:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:29:03.399 22:26:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:03.399 "params": { 00:29:03.399 "name": "Nvme1", 00:29:03.399 "trtype": "tcp", 00:29:03.399 "traddr": "10.0.0.2", 00:29:03.399 "adrfam": "ipv4", 00:29:03.399 "trsvcid": "4420", 00:29:03.399 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:03.399 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:03.399 "hdgst": false, 00:29:03.399 "ddgst": false 00:29:03.399 }, 00:29:03.399 "method": "bdev_nvme_attach_controller" 00:29:03.399 }' 00:29:03.399 [2024-07-15 22:26:28.539394] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
00:29:03.399 [2024-07-15 22:26:28.539450] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2955050 ] 00:29:03.399 EAL: No free 2048 kB hugepages reported on node 1 00:29:03.399 [2024-07-15 22:26:28.598473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:03.399 [2024-07-15 22:26:28.662061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:03.671 Running I/O for 15 seconds... 00:29:06.259 22:26:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2954536 00:29:06.259 22:26:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:29:06.259 [2024-07-15 22:26:31.504750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:125360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.259 [2024-07-15 22:26:31.504792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.259 [2024-07-15 22:26:31.504812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:125368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.259 [2024-07-15 22:26:31.504822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.259 [2024-07-15 22:26:31.504832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:125376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.259 [2024-07-15 22:26:31.504840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.259 [2024-07-15 22:26:31.504849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:125384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.259 [2024-07-15 22:26:31.504857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.259 [2024-07-15 22:26:31.504867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:125392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.259 [2024-07-15 22:26:31.504876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.259 [2024-07-15 22:26:31.504886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:125400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.259 [2024-07-15 22:26:31.504896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.259 [2024-07-15 22:26:31.504908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:125408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.259 [2024-07-15 22:26:31.504922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.259 [2024-07-15 22:26:31.504933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:125416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.259 [2024-07-15 22:26:31.504942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.259 [2024-07-15 22:26:31.504954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:125424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.259 [2024-07-15 22:26:31.504963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.259 [2024-07-15 22:26:31.504975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:125432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.259 [2024-07-15 22:26:31.504984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.259 [2024-07-15 22:26:31.504997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:125440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.259 [2024-07-15 22:26:31.505005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.259 [2024-07-15 22:26:31.505016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:125448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.259 [2024-07-15 22:26:31.505022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.259 [2024-07-15 22:26:31.505032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:125456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.259 [2024-07-15 22:26:31.505039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.259 [2024-07-15 22:26:31.505048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:125464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.259 [2024-07-15 22:26:31.505056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.259 [2024-07-15 22:26:31.505065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:125472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.259 [2024-07-15 22:26:31.505072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.260 [2024-07-15 22:26:31.505081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:125480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.260 [2024-07-15 22:26:31.505088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.260 [2024-07-15 22:26:31.505098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:125488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.260 [2024-07-15 22:26:31.505105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.260 [2024-07-15 22:26:31.505114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:125496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.260 [2024-07-15 22:26:31.505219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:06.260 [2024-07-15 22:26:31.505230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:125504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.260 [2024-07-15 22:26:31.505237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.260 [2024-07-15 22:26:31.505249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:125512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.260 [2024-07-15 22:26:31.505256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.260 [2024-07-15 22:26:31.505265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:125520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.260 [2024-07-15 22:26:31.505273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.260 [2024-07-15 22:26:31.505282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:125528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.260 [2024-07-15 22:26:31.505289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.260 [2024-07-15 22:26:31.505298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:125536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.260 [2024-07-15 22:26:31.505305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.260 [2024-07-15 22:26:31.505314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:125544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.260 [2024-07-15 22:26:31.505321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.260 [2024-07-15 22:26:31.505331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:125552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.260 [2024-07-15 22:26:31.505338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.260 [2024-07-15 22:26:31.505347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:125560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.260 [2024-07-15 22:26:31.505354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.260 [2024-07-15 22:26:31.505363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:125568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.260 [2024-07-15 22:26:31.505370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.260 [2024-07-15 22:26:31.505379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:125576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.260 [2024-07-15 22:26:31.505386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
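The run of ABORTED - SQ DELETION completions surrounding this point is expected in this test: the second bdevperf pass was started with -t 15 -f, and host/bdevperf.sh@33 (traced above) then kills the target outright, so every command still queued on the qpair is completed as aborted. Reduced to its two lines, with the pid taken from this run:

    kill -9 "$nvmfpid"   # 2954536 here; the hard kill of nvmf_tgt is what triggers the aborts
    sleep 3              # host/bdevperf.sh@35 as traced above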
00:29:06.260 [2024-07-15 22:26:31.505395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:125584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.260 [2024-07-15 22:26:31.505402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.260 [2024-07-15 22:26:31.505411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:125592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.260 [2024-07-15 22:26:31.505418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.260 [2024-07-15 22:26:31.505427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:125600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.260 [2024-07-15 22:26:31.505434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.260 [2024-07-15 22:26:31.505443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:125720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.260 [2024-07-15 22:26:31.505451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.260 [2024-07-15 22:26:31.505461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:125728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.260 [2024-07-15 22:26:31.505468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.260 [2024-07-15 22:26:31.505477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:125736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.260 [2024-07-15 22:26:31.505485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.260 [2024-07-15 22:26:31.505495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:125744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.260 [2024-07-15 22:26:31.505501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.260 [2024-07-15 22:26:31.505511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:125752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.260 [2024-07-15 22:26:31.505518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.260 [2024-07-15 22:26:31.505527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:125760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.260 [2024-07-15 22:26:31.505534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.260 [2024-07-15 22:26:31.505544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:125768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.260 [2024-07-15 22:26:31.505551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.260 [2024-07-15 22:26:31.505560] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:125776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.260 [2024-07-15 22:26:31.505567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.260 [2024-07-15 22:26:31.505576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:125784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.260 [2024-07-15 22:26:31.505583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.260 [2024-07-15 22:26:31.505592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:125792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.260 [2024-07-15 22:26:31.505599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.260 [2024-07-15 22:26:31.505608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:125800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.260 [2024-07-15 22:26:31.505615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.260 [2024-07-15 22:26:31.505624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:125808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.260 [2024-07-15 22:26:31.505631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.260 [2024-07-15 22:26:31.505640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:125816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.260 [2024-07-15 22:26:31.505647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.260 [2024-07-15 22:26:31.505656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:125824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.260 [2024-07-15 22:26:31.505664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.260 [2024-07-15 22:26:31.505674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:125832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.260 [2024-07-15 22:26:31.505681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.260 [2024-07-15 22:26:31.505690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:125840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.260 [2024-07-15 22:26:31.505697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.260 [2024-07-15 22:26:31.505706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:125848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.260 [2024-07-15 22:26:31.505713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.260 [2024-07-15 22:26:31.505722] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:125856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.260 [2024-07-15 22:26:31.505729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.260 [2024-07-15 22:26:31.505738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:125864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.260 [2024-07-15 22:26:31.505745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.260 [2024-07-15 22:26:31.505754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:125872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.260 [2024-07-15 22:26:31.505761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.260 [2024-07-15 22:26:31.505770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:125880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.260 [2024-07-15 22:26:31.505777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.260 [2024-07-15 22:26:31.505786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:125888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.260 [2024-07-15 22:26:31.505792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.261 [2024-07-15 22:26:31.505801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:125896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.261 [2024-07-15 22:26:31.505808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.261 [2024-07-15 22:26:31.505817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:125904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.261 [2024-07-15 22:26:31.505824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.261 [2024-07-15 22:26:31.505833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:125912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.261 [2024-07-15 22:26:31.505840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.261 [2024-07-15 22:26:31.505849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:125920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.261 [2024-07-15 22:26:31.505856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.261 [2024-07-15 22:26:31.505867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:125928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.261 [2024-07-15 22:26:31.505874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.261 [2024-07-15 22:26:31.505883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:59 nsid:1 lba:125936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.261 [2024-07-15 22:26:31.505890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.261 [2024-07-15 22:26:31.505900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:125944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.261 [2024-07-15 22:26:31.505908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.261 [2024-07-15 22:26:31.505917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:125952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.261 [2024-07-15 22:26:31.505924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.261 [2024-07-15 22:26:31.505933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:125960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.261 [2024-07-15 22:26:31.505940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.261 [2024-07-15 22:26:31.505948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:125968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.261 [2024-07-15 22:26:31.505956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.261 [2024-07-15 22:26:31.505965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:125976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.261 [2024-07-15 22:26:31.505971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.261 [2024-07-15 22:26:31.505980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:125984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.261 [2024-07-15 22:26:31.505987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.261 [2024-07-15 22:26:31.505996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:125992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.261 [2024-07-15 22:26:31.506004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.261 [2024-07-15 22:26:31.506013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:126000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.261 [2024-07-15 22:26:31.506021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.261 [2024-07-15 22:26:31.506030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:126008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.261 [2024-07-15 22:26:31.506037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.261 [2024-07-15 22:26:31.506046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:126016 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.261 [2024-07-15 22:26:31.506053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.261 [2024-07-15 22:26:31.506062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:126024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.261 [2024-07-15 22:26:31.506071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.261 [2024-07-15 22:26:31.506081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:126032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.261 [2024-07-15 22:26:31.506087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.261 [2024-07-15 22:26:31.506097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:126040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.261 [2024-07-15 22:26:31.506104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.261 [2024-07-15 22:26:31.506113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:126048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.261 [2024-07-15 22:26:31.506120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.261 [2024-07-15 22:26:31.506133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:126056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.261 [2024-07-15 22:26:31.506140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.261 [2024-07-15 22:26:31.506150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:126064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.261 [2024-07-15 22:26:31.506156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.261 [2024-07-15 22:26:31.506165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:126072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.261 [2024-07-15 22:26:31.506173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.261 [2024-07-15 22:26:31.506182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:126080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.261 [2024-07-15 22:26:31.506189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.261 [2024-07-15 22:26:31.506198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:126088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.261 [2024-07-15 22:26:31.506205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.261 [2024-07-15 22:26:31.506214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:126096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:06.261 [2024-07-15 22:26:31.506221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.261 [2024-07-15 22:26:31.506230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:126104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.261 [2024-07-15 22:26:31.506237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.261 [2024-07-15 22:26:31.506246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:126112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.261 [2024-07-15 22:26:31.506253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.261 [2024-07-15 22:26:31.506262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:126120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.261 [2024-07-15 22:26:31.506269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.261 [2024-07-15 22:26:31.506280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:126128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.261 [2024-07-15 22:26:31.506287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.261 [2024-07-15 22:26:31.506296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:126136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.261 [2024-07-15 22:26:31.506302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.261 [2024-07-15 22:26:31.506311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:126144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.261 [2024-07-15 22:26:31.506318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.261 [2024-07-15 22:26:31.506327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:126152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.261 [2024-07-15 22:26:31.506334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.261 [2024-07-15 22:26:31.506343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:126160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.261 [2024-07-15 22:26:31.506350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.261 [2024-07-15 22:26:31.506359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:126168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.261 [2024-07-15 22:26:31.506366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.261 [2024-07-15 22:26:31.506375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:126176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.261 [2024-07-15 22:26:31.506382] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.261 [2024-07-15 22:26:31.506391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:126184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.262 [2024-07-15 22:26:31.506398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.262 [2024-07-15 22:26:31.506407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:126192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.262 [2024-07-15 22:26:31.506413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.262 [2024-07-15 22:26:31.506423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:126200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.262 [2024-07-15 22:26:31.506429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.262 [2024-07-15 22:26:31.506438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:126208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.262 [2024-07-15 22:26:31.506445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.262 [2024-07-15 22:26:31.506454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:126216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.262 [2024-07-15 22:26:31.506461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.262 [2024-07-15 22:26:31.506470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:126224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.262 [2024-07-15 22:26:31.506478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.262 [2024-07-15 22:26:31.506487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:126232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.262 [2024-07-15 22:26:31.506494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.262 [2024-07-15 22:26:31.506503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:126240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.262 [2024-07-15 22:26:31.506510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.262 [2024-07-15 22:26:31.506519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:126248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.262 [2024-07-15 22:26:31.506526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.262 [2024-07-15 22:26:31.506535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:126256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.262 [2024-07-15 22:26:31.506542] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.262 [2024-07-15 22:26:31.506551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:126264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.262 [2024-07-15 22:26:31.506558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.262 [2024-07-15 22:26:31.506567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:126272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.262 [2024-07-15 22:26:31.506574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.262 [2024-07-15 22:26:31.506583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:126280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.262 [2024-07-15 22:26:31.506590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.262 [2024-07-15 22:26:31.506599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:126288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.262 [2024-07-15 22:26:31.506606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.262 [2024-07-15 22:26:31.506615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:126296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.262 [2024-07-15 22:26:31.506623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.262 [2024-07-15 22:26:31.506632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:126304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.262 [2024-07-15 22:26:31.506639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.262 [2024-07-15 22:26:31.506648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:126312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.262 [2024-07-15 22:26:31.506654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.262 [2024-07-15 22:26:31.506664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:126320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.262 [2024-07-15 22:26:31.506671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.262 [2024-07-15 22:26:31.506680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:126328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.262 [2024-07-15 22:26:31.506688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.262 [2024-07-15 22:26:31.506697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:126336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.262 [2024-07-15 22:26:31.506705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.262 [2024-07-15 22:26:31.506714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:126344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.262 [2024-07-15 22:26:31.506721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.262 [2024-07-15 22:26:31.506730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:126352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.262 [2024-07-15 22:26:31.506737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.262 [2024-07-15 22:26:31.506746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:126360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.262 [2024-07-15 22:26:31.506752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.262 [2024-07-15 22:26:31.506762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:126368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.262 [2024-07-15 22:26:31.506769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.262 [2024-07-15 22:26:31.506778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:126376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.262 [2024-07-15 22:26:31.506785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.262 [2024-07-15 22:26:31.506794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:125608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.262 [2024-07-15 22:26:31.506801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.262 [2024-07-15 22:26:31.506810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:125616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.262 [2024-07-15 22:26:31.506817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.262 [2024-07-15 22:26:31.506826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:125624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.262 [2024-07-15 22:26:31.506833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.262 [2024-07-15 22:26:31.506842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:125632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.262 [2024-07-15 22:26:31.506849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.262 [2024-07-15 22:26:31.506858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:125640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.262 [2024-07-15 22:26:31.506865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.262 [2024-07-15 22:26:31.506874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:125648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.262 [2024-07-15 22:26:31.506881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.262 [2024-07-15 22:26:31.506892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:125656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.262 [2024-07-15 22:26:31.506899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.262 [2024-07-15 22:26:31.506907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:125664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.262 [2024-07-15 22:26:31.506914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.262 [2024-07-15 22:26:31.506923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:125672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.262 [2024-07-15 22:26:31.506930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.262 [2024-07-15 22:26:31.506939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:125680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.262 [2024-07-15 22:26:31.506946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.262 [2024-07-15 22:26:31.506955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:125688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.262 [2024-07-15 22:26:31.506962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.262 [2024-07-15 22:26:31.506972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:125696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.262 [2024-07-15 22:26:31.506978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.262 [2024-07-15 22:26:31.506987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:125704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.262 [2024-07-15 22:26:31.506994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.262 [2024-07-15 22:26:31.507002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe57a00 is same with the state(5) to be set 00:29:06.262 [2024-07-15 22:26:31.507010] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:06.262 [2024-07-15 22:26:31.507016] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:06.262 [2024-07-15 22:26:31.507023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125712 len:8 PRP1 0x0 PRP2 0x0 00:29:06.263 [2024-07-15 22:26:31.507029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.263 [2024-07-15 22:26:31.507067] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe57a00 was disconnected and freed. reset controller. 00:29:06.263 [2024-07-15 22:26:31.510641] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.263 [2024-07-15 22:26:31.510688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:06.263 [2024-07-15 22:26:31.511700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.263 [2024-07-15 22:26:31.511737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:06.263 [2024-07-15 22:26:31.511747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:06.263 [2024-07-15 22:26:31.511989] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:06.263 [2024-07-15 22:26:31.512217] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.263 [2024-07-15 22:26:31.512231] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.263 [2024-07-15 22:26:31.512241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.263 [2024-07-15 22:26:31.515753] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.263 [2024-07-15 22:26:31.524647] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.263 [2024-07-15 22:26:31.525392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.263 [2024-07-15 22:26:31.525429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:06.263 [2024-07-15 22:26:31.525439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:06.263 [2024-07-15 22:26:31.525677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:06.263 [2024-07-15 22:26:31.525897] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.263 [2024-07-15 22:26:31.525906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.263 [2024-07-15 22:26:31.525915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.263 [2024-07-15 22:26:31.529436] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.263 [2024-07-15 22:26:31.538545] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.263 [2024-07-15 22:26:31.539367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.263 [2024-07-15 22:26:31.539404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:06.263 [2024-07-15 22:26:31.539417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:06.263 [2024-07-15 22:26:31.539654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:06.263 [2024-07-15 22:26:31.539874] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.263 [2024-07-15 22:26:31.539883] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.263 [2024-07-15 22:26:31.539891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.263 [2024-07-15 22:26:31.543421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.263 [2024-07-15 22:26:31.552323] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.263 [2024-07-15 22:26:31.553094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.263 [2024-07-15 22:26:31.553137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:06.263 [2024-07-15 22:26:31.553149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:06.263 [2024-07-15 22:26:31.553386] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:06.263 [2024-07-15 22:26:31.553606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.263 [2024-07-15 22:26:31.553615] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.263 [2024-07-15 22:26:31.553622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.263 [2024-07-15 22:26:31.557134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.263 [2024-07-15 22:26:31.566281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.263 [2024-07-15 22:26:31.566970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.263 [2024-07-15 22:26:31.566987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:06.263 [2024-07-15 22:26:31.566995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:06.263 [2024-07-15 22:26:31.567218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:06.263 [2024-07-15 22:26:31.567436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.263 [2024-07-15 22:26:31.567443] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.263 [2024-07-15 22:26:31.567450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.263 [2024-07-15 22:26:31.570954] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.263 [2024-07-15 22:26:31.580049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.263 [2024-07-15 22:26:31.580675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.263 [2024-07-15 22:26:31.580692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:06.263 [2024-07-15 22:26:31.580699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:06.263 [2024-07-15 22:26:31.580915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:06.527 [2024-07-15 22:26:31.581137] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.527 [2024-07-15 22:26:31.581145] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.527 [2024-07-15 22:26:31.581152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.527 [2024-07-15 22:26:31.584654] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.527 [2024-07-15 22:26:31.593951] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.527 [2024-07-15 22:26:31.594665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.527 [2024-07-15 22:26:31.594702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:06.527 [2024-07-15 22:26:31.594712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:06.527 [2024-07-15 22:26:31.594949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:06.527 [2024-07-15 22:26:31.595184] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.527 [2024-07-15 22:26:31.595194] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.527 [2024-07-15 22:26:31.595201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.527 [2024-07-15 22:26:31.598709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.527 [2024-07-15 22:26:31.607798] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.527 [2024-07-15 22:26:31.608516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.527 [2024-07-15 22:26:31.608553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:06.527 [2024-07-15 22:26:31.608563] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:06.527 [2024-07-15 22:26:31.608804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:06.527 [2024-07-15 22:26:31.609025] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.527 [2024-07-15 22:26:31.609033] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.527 [2024-07-15 22:26:31.609040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.527 [2024-07-15 22:26:31.612554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.527 [2024-07-15 22:26:31.621637] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.527 [2024-07-15 22:26:31.622364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.527 [2024-07-15 22:26:31.622400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:06.527 [2024-07-15 22:26:31.622411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:06.527 [2024-07-15 22:26:31.622647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:06.527 [2024-07-15 22:26:31.622867] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.527 [2024-07-15 22:26:31.622876] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.527 [2024-07-15 22:26:31.622883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.527 [2024-07-15 22:26:31.626399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.527 [2024-07-15 22:26:31.635488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.527 [2024-07-15 22:26:31.636223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.527 [2024-07-15 22:26:31.636260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:06.527 [2024-07-15 22:26:31.636271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:06.527 [2024-07-15 22:26:31.636509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:06.527 [2024-07-15 22:26:31.636729] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.527 [2024-07-15 22:26:31.636738] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.527 [2024-07-15 22:26:31.636746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.527 [2024-07-15 22:26:31.640265] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.527 [2024-07-15 22:26:31.649373] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.527 [2024-07-15 22:26:31.650141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.527 [2024-07-15 22:26:31.650177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:06.527 [2024-07-15 22:26:31.650189] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:06.527 [2024-07-15 22:26:31.650428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:06.527 [2024-07-15 22:26:31.650649] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.527 [2024-07-15 22:26:31.650657] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.527 [2024-07-15 22:26:31.650669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.527 [2024-07-15 22:26:31.654186] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.527 [2024-07-15 22:26:31.663275] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.527 [2024-07-15 22:26:31.663925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.527 [2024-07-15 22:26:31.663943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:06.527 [2024-07-15 22:26:31.663951] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:06.527 [2024-07-15 22:26:31.664175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:06.527 [2024-07-15 22:26:31.664392] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.527 [2024-07-15 22:26:31.664399] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.527 [2024-07-15 22:26:31.664406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.527 [2024-07-15 22:26:31.667907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.527 [2024-07-15 22:26:31.677197] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.527 [2024-07-15 22:26:31.677873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.527 [2024-07-15 22:26:31.677888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:06.527 [2024-07-15 22:26:31.677895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:06.527 [2024-07-15 22:26:31.678112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:06.527 [2024-07-15 22:26:31.678334] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.527 [2024-07-15 22:26:31.678342] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.527 [2024-07-15 22:26:31.678348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.527 [2024-07-15 22:26:31.681848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.527 [2024-07-15 22:26:31.691135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.527 [2024-07-15 22:26:31.691799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.527 [2024-07-15 22:26:31.691814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:06.527 [2024-07-15 22:26:31.691821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:06.527 [2024-07-15 22:26:31.692037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:06.527 [2024-07-15 22:26:31.692259] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.527 [2024-07-15 22:26:31.692268] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.527 [2024-07-15 22:26:31.692274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.527 [2024-07-15 22:26:31.695773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.527 [2024-07-15 22:26:31.705056] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.527 [2024-07-15 22:26:31.705732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.527 [2024-07-15 22:26:31.705747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:06.527 [2024-07-15 22:26:31.705755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:06.527 [2024-07-15 22:26:31.705970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:06.527 [2024-07-15 22:26:31.706191] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.527 [2024-07-15 22:26:31.706199] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.527 [2024-07-15 22:26:31.706206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.527 [2024-07-15 22:26:31.709706] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.527 [2024-07-15 22:26:31.718987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.527 [2024-07-15 22:26:31.719687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.527 [2024-07-15 22:26:31.719723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:06.527 [2024-07-15 22:26:31.719734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:06.527 [2024-07-15 22:26:31.719971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:06.527 [2024-07-15 22:26:31.720201] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.527 [2024-07-15 22:26:31.720211] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.527 [2024-07-15 22:26:31.720218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.527 [2024-07-15 22:26:31.723729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.527 [2024-07-15 22:26:31.732833] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.527 [2024-07-15 22:26:31.733567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.527 [2024-07-15 22:26:31.733604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:06.527 [2024-07-15 22:26:31.733614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:06.528 [2024-07-15 22:26:31.733851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:06.528 [2024-07-15 22:26:31.734071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.528 [2024-07-15 22:26:31.734079] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.528 [2024-07-15 22:26:31.734087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.528 [2024-07-15 22:26:31.737605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.528 [2024-07-15 22:26:31.746706] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.528 [2024-07-15 22:26:31.747408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.528 [2024-07-15 22:26:31.747445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:06.528 [2024-07-15 22:26:31.747456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:06.528 [2024-07-15 22:26:31.747692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:06.528 [2024-07-15 22:26:31.747916] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.528 [2024-07-15 22:26:31.747925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.528 [2024-07-15 22:26:31.747932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.528 [2024-07-15 22:26:31.751445] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.528 [2024-07-15 22:26:31.760545] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.528 [2024-07-15 22:26:31.761101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.528 [2024-07-15 22:26:31.761119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:06.528 [2024-07-15 22:26:31.761133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:06.528 [2024-07-15 22:26:31.761349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:06.528 [2024-07-15 22:26:31.761566] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.528 [2024-07-15 22:26:31.761573] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.528 [2024-07-15 22:26:31.761580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.528 [2024-07-15 22:26:31.765097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.528 [2024-07-15 22:26:31.774416] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.528 [2024-07-15 22:26:31.774972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.528 [2024-07-15 22:26:31.774988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:06.528 [2024-07-15 22:26:31.774996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:06.528 [2024-07-15 22:26:31.775218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:06.528 [2024-07-15 22:26:31.775435] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.528 [2024-07-15 22:26:31.775443] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.528 [2024-07-15 22:26:31.775450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.528 [2024-07-15 22:26:31.778954] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.528 [2024-07-15 22:26:31.788263] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.528 [2024-07-15 22:26:31.788889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.528 [2024-07-15 22:26:31.788903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:06.528 [2024-07-15 22:26:31.788911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:06.528 [2024-07-15 22:26:31.789132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:06.528 [2024-07-15 22:26:31.789348] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.528 [2024-07-15 22:26:31.789356] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.528 [2024-07-15 22:26:31.789362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.528 [2024-07-15 22:26:31.792874] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.528 [2024-07-15 22:26:31.802180] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.528 [2024-07-15 22:26:31.802815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.528 [2024-07-15 22:26:31.802830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:06.528 [2024-07-15 22:26:31.802837] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:06.528 [2024-07-15 22:26:31.803052] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:06.528 [2024-07-15 22:26:31.803274] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.528 [2024-07-15 22:26:31.803283] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.528 [2024-07-15 22:26:31.803289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.528 [2024-07-15 22:26:31.806795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.528 [2024-07-15 22:26:31.816131] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.528 [2024-07-15 22:26:31.816843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.528 [2024-07-15 22:26:31.816880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:06.528 [2024-07-15 22:26:31.816890] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:06.528 [2024-07-15 22:26:31.817135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:06.528 [2024-07-15 22:26:31.817357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.528 [2024-07-15 22:26:31.817366] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.528 [2024-07-15 22:26:31.817374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.528 [2024-07-15 22:26:31.820882] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.528 [2024-07-15 22:26:31.829984] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.528 [2024-07-15 22:26:31.830664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.528 [2024-07-15 22:26:31.830682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:06.528 [2024-07-15 22:26:31.830690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:06.528 [2024-07-15 22:26:31.830907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:06.528 [2024-07-15 22:26:31.831130] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.528 [2024-07-15 22:26:31.831138] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.528 [2024-07-15 22:26:31.831145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.528 [2024-07-15 22:26:31.834648] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.528 [2024-07-15 22:26:31.843750] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.528 [2024-07-15 22:26:31.844437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.528 [2024-07-15 22:26:31.844473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:06.528 [2024-07-15 22:26:31.844489] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:06.528 [2024-07-15 22:26:31.844726] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:06.528 [2024-07-15 22:26:31.844946] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.528 [2024-07-15 22:26:31.844955] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.528 [2024-07-15 22:26:31.844963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.528 [2024-07-15 22:26:31.848481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.790 [2024-07-15 22:26:31.857585] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.790 [2024-07-15 22:26:31.858360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.790 [2024-07-15 22:26:31.858396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:06.790 [2024-07-15 22:26:31.858407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:06.790 [2024-07-15 22:26:31.858643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:06.790 [2024-07-15 22:26:31.858863] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.790 [2024-07-15 22:26:31.858872] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.790 [2024-07-15 22:26:31.858879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.790 [2024-07-15 22:26:31.862399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.790 [2024-07-15 22:26:31.871500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.790 [2024-07-15 22:26:31.872166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.790 [2024-07-15 22:26:31.872190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:06.790 [2024-07-15 22:26:31.872199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:06.790 [2024-07-15 22:26:31.872421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:06.790 [2024-07-15 22:26:31.872638] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.790 [2024-07-15 22:26:31.872646] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.791 [2024-07-15 22:26:31.872653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.791 [2024-07-15 22:26:31.876171] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.791 [2024-07-15 22:26:31.885265] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.791 [2024-07-15 22:26:31.886047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.791 [2024-07-15 22:26:31.886084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:06.791 [2024-07-15 22:26:31.886094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:06.791 [2024-07-15 22:26:31.886339] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:06.791 [2024-07-15 22:26:31.886560] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.791 [2024-07-15 22:26:31.886573] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.791 [2024-07-15 22:26:31.886581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.791 [2024-07-15 22:26:31.890090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.791 [2024-07-15 22:26:31.899183] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.791 [2024-07-15 22:26:31.899911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.791 [2024-07-15 22:26:31.899947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:06.791 [2024-07-15 22:26:31.899958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:06.791 [2024-07-15 22:26:31.900202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:06.791 [2024-07-15 22:26:31.900423] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.791 [2024-07-15 22:26:31.900431] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.791 [2024-07-15 22:26:31.900439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.791 [2024-07-15 22:26:31.903943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.791 [2024-07-15 22:26:31.913033] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.791 [2024-07-15 22:26:31.913741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.791 [2024-07-15 22:26:31.913778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:06.791 [2024-07-15 22:26:31.913789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:06.791 [2024-07-15 22:26:31.914025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:06.791 [2024-07-15 22:26:31.914255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.791 [2024-07-15 22:26:31.914264] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.791 [2024-07-15 22:26:31.914271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.791 [2024-07-15 22:26:31.917778] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.791 [2024-07-15 22:26:31.926867] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.791 [2024-07-15 22:26:31.927599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.791 [2024-07-15 22:26:31.927636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:06.791 [2024-07-15 22:26:31.927646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:06.791 [2024-07-15 22:26:31.927883] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:06.791 [2024-07-15 22:26:31.928103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.791 [2024-07-15 22:26:31.928111] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.791 [2024-07-15 22:26:31.928118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.791 [2024-07-15 22:26:31.931633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.791 [2024-07-15 22:26:31.940735] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.791 [2024-07-15 22:26:31.941435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.791 [2024-07-15 22:26:31.941472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:06.791 [2024-07-15 22:26:31.941482] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:06.791 [2024-07-15 22:26:31.941719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:06.791 [2024-07-15 22:26:31.941940] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.791 [2024-07-15 22:26:31.941948] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.791 [2024-07-15 22:26:31.941955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.791 [2024-07-15 22:26:31.945482] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.791 [2024-07-15 22:26:31.954573] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.791 [2024-07-15 22:26:31.955369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.791 [2024-07-15 22:26:31.955405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:06.791 [2024-07-15 22:26:31.955417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:06.791 [2024-07-15 22:26:31.955655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:06.791 [2024-07-15 22:26:31.955875] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.791 [2024-07-15 22:26:31.955883] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.791 [2024-07-15 22:26:31.955892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.791 [2024-07-15 22:26:31.959409] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.791 [2024-07-15 22:26:31.968506] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.791 [2024-07-15 22:26:31.969214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.791 [2024-07-15 22:26:31.969251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:06.791 [2024-07-15 22:26:31.969262] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:06.791 [2024-07-15 22:26:31.969498] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:06.791 [2024-07-15 22:26:31.969718] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.791 [2024-07-15 22:26:31.969727] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.791 [2024-07-15 22:26:31.969734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.791 [2024-07-15 22:26:31.973249] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.791 [2024-07-15 22:26:31.982340] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.791 [2024-07-15 22:26:31.983116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.791 [2024-07-15 22:26:31.983160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:06.791 [2024-07-15 22:26:31.983171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:06.791 [2024-07-15 22:26:31.983415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:06.791 [2024-07-15 22:26:31.983636] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.791 [2024-07-15 22:26:31.983645] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.791 [2024-07-15 22:26:31.983653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.791 [2024-07-15 22:26:31.987161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.791 [2024-07-15 22:26:31.996256] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.791 [2024-07-15 22:26:31.996929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.791 [2024-07-15 22:26:31.996947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:06.791 [2024-07-15 22:26:31.996954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:06.791 [2024-07-15 22:26:31.997176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:06.791 [2024-07-15 22:26:31.997393] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.791 [2024-07-15 22:26:31.997400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.791 [2024-07-15 22:26:31.997407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.791 [2024-07-15 22:26:32.000908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.791 [2024-07-15 22:26:32.010000] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.791 [2024-07-15 22:26:32.010645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.791 [2024-07-15 22:26:32.010661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:06.791 [2024-07-15 22:26:32.010669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:06.791 [2024-07-15 22:26:32.010884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:06.791 [2024-07-15 22:26:32.011100] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.791 [2024-07-15 22:26:32.011108] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.791 [2024-07-15 22:26:32.011115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.791 [2024-07-15 22:26:32.014618] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.791 [2024-07-15 22:26:32.023905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.791 [2024-07-15 22:26:32.024535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.791 [2024-07-15 22:26:32.024550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:06.791 [2024-07-15 22:26:32.024557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:06.791 [2024-07-15 22:26:32.024947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:06.791 [2024-07-15 22:26:32.025216] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.791 [2024-07-15 22:26:32.025226] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.791 [2024-07-15 22:26:32.025237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.791 [2024-07-15 22:26:32.028741] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.791 [2024-07-15 22:26:32.037832] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.791 [2024-07-15 22:26:32.038562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.791 [2024-07-15 22:26:32.038599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:06.791 [2024-07-15 22:26:32.038609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:06.792 [2024-07-15 22:26:32.038846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:06.792 [2024-07-15 22:26:32.039066] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.792 [2024-07-15 22:26:32.039075] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.792 [2024-07-15 22:26:32.039082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.792 [2024-07-15 22:26:32.042597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.792 [2024-07-15 22:26:32.051695] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.792 [2024-07-15 22:26:32.052494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.792 [2024-07-15 22:26:32.052531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:06.792 [2024-07-15 22:26:32.052541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:06.792 [2024-07-15 22:26:32.052778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:06.792 [2024-07-15 22:26:32.052998] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.792 [2024-07-15 22:26:32.053007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.792 [2024-07-15 22:26:32.053014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.792 [2024-07-15 22:26:32.056528] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.792 [2024-07-15 22:26:32.065617] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.792 [2024-07-15 22:26:32.066347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.792 [2024-07-15 22:26:32.066384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:06.792 [2024-07-15 22:26:32.066394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:06.792 [2024-07-15 22:26:32.066631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:06.792 [2024-07-15 22:26:32.066851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.792 [2024-07-15 22:26:32.066859] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.792 [2024-07-15 22:26:32.066867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.792 [2024-07-15 22:26:32.070382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.792 [2024-07-15 22:26:32.079479] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.792 [2024-07-15 22:26:32.080227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.792 [2024-07-15 22:26:32.080264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:06.792 [2024-07-15 22:26:32.080276] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:06.792 [2024-07-15 22:26:32.080513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:06.792 [2024-07-15 22:26:32.080734] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.792 [2024-07-15 22:26:32.080742] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.792 [2024-07-15 22:26:32.080749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.792 [2024-07-15 22:26:32.084263] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.792 [2024-07-15 22:26:32.093354] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.792 [2024-07-15 22:26:32.094086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.792 [2024-07-15 22:26:32.094130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:06.792 [2024-07-15 22:26:32.094142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:06.792 [2024-07-15 22:26:32.094379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:06.792 [2024-07-15 22:26:32.094598] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.792 [2024-07-15 22:26:32.094606] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.792 [2024-07-15 22:26:32.094614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.792 [2024-07-15 22:26:32.098117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.792 [2024-07-15 22:26:32.107211] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.792 [2024-07-15 22:26:32.107993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.792 [2024-07-15 22:26:32.108030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:06.792 [2024-07-15 22:26:32.108042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:06.792 [2024-07-15 22:26:32.108289] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:06.792 [2024-07-15 22:26:32.108510] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.792 [2024-07-15 22:26:32.108519] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.792 [2024-07-15 22:26:32.108526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.792 [2024-07-15 22:26:32.112033] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.054 [2024-07-15 22:26:32.121119] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.054 [2024-07-15 22:26:32.121887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.054 [2024-07-15 22:26:32.121924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:07.054 [2024-07-15 22:26:32.121936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:07.054 [2024-07-15 22:26:32.122185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:07.054 [2024-07-15 22:26:32.122407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.054 [2024-07-15 22:26:32.122416] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.054 [2024-07-15 22:26:32.122423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.054 [2024-07-15 22:26:32.125934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.054 [2024-07-15 22:26:32.135028] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.054 [2024-07-15 22:26:32.135742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.054 [2024-07-15 22:26:32.135779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:07.054 [2024-07-15 22:26:32.135789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:07.054 [2024-07-15 22:26:32.136025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:07.054 [2024-07-15 22:26:32.136255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.054 [2024-07-15 22:26:32.136264] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.054 [2024-07-15 22:26:32.136272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.054 [2024-07-15 22:26:32.139778] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.054 [2024-07-15 22:26:32.148880] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.054 [2024-07-15 22:26:32.149616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.054 [2024-07-15 22:26:32.149652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:07.054 [2024-07-15 22:26:32.149663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:07.054 [2024-07-15 22:26:32.149899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:07.054 [2024-07-15 22:26:32.150119] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.054 [2024-07-15 22:26:32.150137] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.054 [2024-07-15 22:26:32.150144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.054 [2024-07-15 22:26:32.153652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.054 [2024-07-15 22:26:32.162739] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.054 [2024-07-15 22:26:32.163398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.054 [2024-07-15 22:26:32.163435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:07.054 [2024-07-15 22:26:32.163446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:07.054 [2024-07-15 22:26:32.163682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:07.054 [2024-07-15 22:26:32.163902] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.054 [2024-07-15 22:26:32.163911] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.054 [2024-07-15 22:26:32.163918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.054 [2024-07-15 22:26:32.167438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.054 [2024-07-15 22:26:32.176527] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.054 [2024-07-15 22:26:32.177151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.054 [2024-07-15 22:26:32.177188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:07.054 [2024-07-15 22:26:32.177198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:07.054 [2024-07-15 22:26:32.177435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:07.054 [2024-07-15 22:26:32.177655] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.054 [2024-07-15 22:26:32.177663] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.054 [2024-07-15 22:26:32.177670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.054 [2024-07-15 22:26:32.181185] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.054 [2024-07-15 22:26:32.190479] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.054 [2024-07-15 22:26:32.191205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.054 [2024-07-15 22:26:32.191242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:07.054 [2024-07-15 22:26:32.191254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:07.054 [2024-07-15 22:26:32.191491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:07.054 [2024-07-15 22:26:32.191711] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.054 [2024-07-15 22:26:32.191720] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.054 [2024-07-15 22:26:32.191727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.054 [2024-07-15 22:26:32.195244] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.054 [2024-07-15 22:26:32.204338] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.054 [2024-07-15 22:26:32.205065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.054 [2024-07-15 22:26:32.205101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:07.054 [2024-07-15 22:26:32.205113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:07.054 [2024-07-15 22:26:32.205361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:07.054 [2024-07-15 22:26:32.205583] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.054 [2024-07-15 22:26:32.205591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.054 [2024-07-15 22:26:32.205598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.054 [2024-07-15 22:26:32.209104] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.054 [2024-07-15 22:26:32.218198] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.054 [2024-07-15 22:26:32.218923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.054 [2024-07-15 22:26:32.218964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:07.054 [2024-07-15 22:26:32.218974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:07.054 [2024-07-15 22:26:32.219219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:07.054 [2024-07-15 22:26:32.219440] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.054 [2024-07-15 22:26:32.219448] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.054 [2024-07-15 22:26:32.219456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.054 [2024-07-15 22:26:32.222960] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.054 [2024-07-15 22:26:32.232053] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.054 [2024-07-15 22:26:32.232805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.054 [2024-07-15 22:26:32.232841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:07.054 [2024-07-15 22:26:32.232852] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:07.054 [2024-07-15 22:26:32.233088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:07.054 [2024-07-15 22:26:32.233317] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.054 [2024-07-15 22:26:32.233326] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.054 [2024-07-15 22:26:32.233334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.054 [2024-07-15 22:26:32.236840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.055 [2024-07-15 22:26:32.245942] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.055 [2024-07-15 22:26:32.246671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.055 [2024-07-15 22:26:32.246707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:07.055 [2024-07-15 22:26:32.246718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:07.055 [2024-07-15 22:26:32.246954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:07.055 [2024-07-15 22:26:32.247183] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.055 [2024-07-15 22:26:32.247193] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.055 [2024-07-15 22:26:32.247200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.055 [2024-07-15 22:26:32.250706] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.055 [2024-07-15 22:26:32.259790] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.055 [2024-07-15 22:26:32.260509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.055 [2024-07-15 22:26:32.260545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:07.055 [2024-07-15 22:26:32.260555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:07.055 [2024-07-15 22:26:32.260791] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:07.055 [2024-07-15 22:26:32.261018] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.055 [2024-07-15 22:26:32.261027] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.055 [2024-07-15 22:26:32.261035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.055 [2024-07-15 22:26:32.264549] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.055 [2024-07-15 22:26:32.273640] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.055 [2024-07-15 22:26:32.274406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.055 [2024-07-15 22:26:32.274442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:07.055 [2024-07-15 22:26:32.274453] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:07.055 [2024-07-15 22:26:32.274689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:07.055 [2024-07-15 22:26:32.274909] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.055 [2024-07-15 22:26:32.274918] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.055 [2024-07-15 22:26:32.274925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.055 [2024-07-15 22:26:32.278440] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.055 [2024-07-15 22:26:32.287533] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.055 [2024-07-15 22:26:32.288328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.055 [2024-07-15 22:26:32.288365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:07.055 [2024-07-15 22:26:32.288375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:07.055 [2024-07-15 22:26:32.288611] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:07.055 [2024-07-15 22:26:32.288831] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.055 [2024-07-15 22:26:32.288840] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.055 [2024-07-15 22:26:32.288847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.055 [2024-07-15 22:26:32.292363] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.055 [2024-07-15 22:26:32.301457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.055 [2024-07-15 22:26:32.302116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.055 [2024-07-15 22:26:32.302159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:07.055 [2024-07-15 22:26:32.302171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:07.055 [2024-07-15 22:26:32.302408] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:07.055 [2024-07-15 22:26:32.302628] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.055 [2024-07-15 22:26:32.302637] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.055 [2024-07-15 22:26:32.302644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.055 [2024-07-15 22:26:32.306156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.055 [2024-07-15 22:26:32.315250] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.055 [2024-07-15 22:26:32.315847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.055 [2024-07-15 22:26:32.315883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:07.055 [2024-07-15 22:26:32.315893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:07.055 [2024-07-15 22:26:32.316139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:07.055 [2024-07-15 22:26:32.316360] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.055 [2024-07-15 22:26:32.316369] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.055 [2024-07-15 22:26:32.316376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.055 [2024-07-15 22:26:32.319882] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.055 [2024-07-15 22:26:32.329188] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.055 [2024-07-15 22:26:32.329941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.055 [2024-07-15 22:26:32.329978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:07.055 [2024-07-15 22:26:32.329989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:07.055 [2024-07-15 22:26:32.330234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:07.055 [2024-07-15 22:26:32.330455] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.055 [2024-07-15 22:26:32.330463] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.055 [2024-07-15 22:26:32.330471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.055 [2024-07-15 22:26:32.333975] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.055 [2024-07-15 22:26:32.343065] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.055 [2024-07-15 22:26:32.343822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.055 [2024-07-15 22:26:32.343858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:07.055 [2024-07-15 22:26:32.343869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:07.055 [2024-07-15 22:26:32.344106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:07.055 [2024-07-15 22:26:32.344335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.055 [2024-07-15 22:26:32.344345] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.055 [2024-07-15 22:26:32.344352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.055 [2024-07-15 22:26:32.347858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.055 [2024-07-15 22:26:32.356952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.055 [2024-07-15 22:26:32.357679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.055 [2024-07-15 22:26:32.357716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:07.055 [2024-07-15 22:26:32.357730] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:07.055 [2024-07-15 22:26:32.357967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:07.055 [2024-07-15 22:26:32.358196] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.055 [2024-07-15 22:26:32.358205] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.055 [2024-07-15 22:26:32.358213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.055 [2024-07-15 22:26:32.361720] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.055 [2024-07-15 22:26:32.370820] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.055 [2024-07-15 22:26:32.371571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.055 [2024-07-15 22:26:32.371608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:07.055 [2024-07-15 22:26:32.371618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:07.055 [2024-07-15 22:26:32.371854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:07.055 [2024-07-15 22:26:32.372074] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.055 [2024-07-15 22:26:32.372083] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.055 [2024-07-15 22:26:32.372091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.055 [2024-07-15 22:26:32.375606] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.317 [2024-07-15 22:26:32.384696] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.317 [2024-07-15 22:26:32.385404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.317 [2024-07-15 22:26:32.385441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:07.317 [2024-07-15 22:26:32.385451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:07.317 [2024-07-15 22:26:32.385687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:07.317 [2024-07-15 22:26:32.385908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.317 [2024-07-15 22:26:32.385916] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.317 [2024-07-15 22:26:32.385923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.317 [2024-07-15 22:26:32.389441] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.317 [2024-07-15 22:26:32.398534] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.317 [2024-07-15 22:26:32.399374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.317 [2024-07-15 22:26:32.399410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:07.317 [2024-07-15 22:26:32.399421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:07.317 [2024-07-15 22:26:32.399657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:07.317 [2024-07-15 22:26:32.399877] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.317 [2024-07-15 22:26:32.399885] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.317 [2024-07-15 22:26:32.399897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.317 [2024-07-15 22:26:32.403414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.317 [2024-07-15 22:26:32.412300] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.317 [2024-07-15 22:26:32.413071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.317 [2024-07-15 22:26:32.413108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:07.317 [2024-07-15 22:26:32.413119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:07.317 [2024-07-15 22:26:32.413368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:07.317 [2024-07-15 22:26:32.413588] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.317 [2024-07-15 22:26:32.413597] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.317 [2024-07-15 22:26:32.413604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.317 [2024-07-15 22:26:32.417113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.317 [2024-07-15 22:26:32.426204] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.317 [2024-07-15 22:26:32.426971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.318 [2024-07-15 22:26:32.427007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:07.318 [2024-07-15 22:26:32.427018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:07.318 [2024-07-15 22:26:32.427262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:07.318 [2024-07-15 22:26:32.427484] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.318 [2024-07-15 22:26:32.427492] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.318 [2024-07-15 22:26:32.427500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.318 [2024-07-15 22:26:32.431005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.318 [2024-07-15 22:26:32.440095] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.318 [2024-07-15 22:26:32.440778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.318 [2024-07-15 22:26:32.440796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:07.318 [2024-07-15 22:26:32.440804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:07.318 [2024-07-15 22:26:32.441021] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:07.318 [2024-07-15 22:26:32.441244] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.318 [2024-07-15 22:26:32.441252] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.318 [2024-07-15 22:26:32.441259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.318 [2024-07-15 22:26:32.444770] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.318 [2024-07-15 22:26:32.453855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.318 [2024-07-15 22:26:32.454472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.318 [2024-07-15 22:26:32.454488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:07.318 [2024-07-15 22:26:32.454495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:07.318 [2024-07-15 22:26:32.454712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:07.318 [2024-07-15 22:26:32.454927] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.318 [2024-07-15 22:26:32.454935] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.318 [2024-07-15 22:26:32.454942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.318 [2024-07-15 22:26:32.458528] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.318 [2024-07-15 22:26:32.467615] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.318 [2024-07-15 22:26:32.468378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.318 [2024-07-15 22:26:32.468416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:07.318 [2024-07-15 22:26:32.468426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:07.318 [2024-07-15 22:26:32.468663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:07.318 [2024-07-15 22:26:32.468883] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.318 [2024-07-15 22:26:32.468891] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.318 [2024-07-15 22:26:32.468899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.318 [2024-07-15 22:26:32.472415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.318 [2024-07-15 22:26:32.481508] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.318 [2024-07-15 22:26:32.482223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.318 [2024-07-15 22:26:32.482260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:07.318 [2024-07-15 22:26:32.482271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:07.318 [2024-07-15 22:26:32.482511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:07.318 [2024-07-15 22:26:32.482731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.318 [2024-07-15 22:26:32.482739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.318 [2024-07-15 22:26:32.482747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.318 [2024-07-15 22:26:32.486260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.318 [2024-07-15 22:26:32.495346] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.318 [2024-07-15 22:26:32.496146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.318 [2024-07-15 22:26:32.496183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:07.318 [2024-07-15 22:26:32.496193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:07.318 [2024-07-15 22:26:32.496434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:07.318 [2024-07-15 22:26:32.496655] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.318 [2024-07-15 22:26:32.496663] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.318 [2024-07-15 22:26:32.496671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.318 [2024-07-15 22:26:32.500187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.318 [2024-07-15 22:26:32.509277] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.318 [2024-07-15 22:26:32.510041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.318 [2024-07-15 22:26:32.510078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:07.318 [2024-07-15 22:26:32.510089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:07.318 [2024-07-15 22:26:32.510337] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:07.318 [2024-07-15 22:26:32.510559] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.318 [2024-07-15 22:26:32.510568] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.318 [2024-07-15 22:26:32.510576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.318 [2024-07-15 22:26:32.514081] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.318 [2024-07-15 22:26:32.523181] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.318 [2024-07-15 22:26:32.523911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.318 [2024-07-15 22:26:32.523948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:07.318 [2024-07-15 22:26:32.523959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:07.318 [2024-07-15 22:26:32.524202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:07.318 [2024-07-15 22:26:32.524423] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.318 [2024-07-15 22:26:32.524432] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.318 [2024-07-15 22:26:32.524439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.318 [2024-07-15 22:26:32.527944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.318 [2024-07-15 22:26:32.537035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.318 [2024-07-15 22:26:32.537749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.318 [2024-07-15 22:26:32.537786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:07.318 [2024-07-15 22:26:32.537797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:07.318 [2024-07-15 22:26:32.538033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:07.318 [2024-07-15 22:26:32.538262] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.318 [2024-07-15 22:26:32.538271] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.318 [2024-07-15 22:26:32.538282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.318 [2024-07-15 22:26:32.541788] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.318 [2024-07-15 22:26:32.550972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.318 [2024-07-15 22:26:32.551575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.318 [2024-07-15 22:26:32.551611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:07.318 [2024-07-15 22:26:32.551623] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:07.319 [2024-07-15 22:26:32.551860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:07.319 [2024-07-15 22:26:32.552080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.319 [2024-07-15 22:26:32.552088] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.319 [2024-07-15 22:26:32.552096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.319 [2024-07-15 22:26:32.555614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.319 [2024-07-15 22:26:32.564923] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.319 [2024-07-15 22:26:32.565709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.319 [2024-07-15 22:26:32.565746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:07.319 [2024-07-15 22:26:32.565756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:07.319 [2024-07-15 22:26:32.565993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:07.319 [2024-07-15 22:26:32.566218] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.319 [2024-07-15 22:26:32.566228] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.319 [2024-07-15 22:26:32.566236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.319 [2024-07-15 22:26:32.569744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.319 [2024-07-15 22:26:32.578833] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.319 [2024-07-15 22:26:32.579562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.319 [2024-07-15 22:26:32.579599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:07.319 [2024-07-15 22:26:32.579610] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:07.319 [2024-07-15 22:26:32.579846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:07.319 [2024-07-15 22:26:32.580067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.319 [2024-07-15 22:26:32.580075] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.319 [2024-07-15 22:26:32.580082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.319 [2024-07-15 22:26:32.583596] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.319 [2024-07-15 22:26:32.592687] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.319 [2024-07-15 22:26:32.593331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.319 [2024-07-15 22:26:32.593355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:07.319 [2024-07-15 22:26:32.593365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:07.319 [2024-07-15 22:26:32.593585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:07.319 [2024-07-15 22:26:32.593802] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.319 [2024-07-15 22:26:32.593809] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.319 [2024-07-15 22:26:32.593815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.319 [2024-07-15 22:26:32.597322] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.319 [2024-07-15 22:26:32.606616] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.319 [2024-07-15 22:26:32.607393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.319 [2024-07-15 22:26:32.607430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:07.319 [2024-07-15 22:26:32.607440] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:07.319 [2024-07-15 22:26:32.607677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:07.319 [2024-07-15 22:26:32.607897] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.319 [2024-07-15 22:26:32.607906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.319 [2024-07-15 22:26:32.607913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.319 [2024-07-15 22:26:32.611427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.319 [2024-07-15 22:26:32.620521] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.319 [2024-07-15 22:26:32.621191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.319 [2024-07-15 22:26:32.621210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:07.319 [2024-07-15 22:26:32.621218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:07.319 [2024-07-15 22:26:32.621434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:07.319 [2024-07-15 22:26:32.621651] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.319 [2024-07-15 22:26:32.621658] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.319 [2024-07-15 22:26:32.621665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.319 [2024-07-15 22:26:32.625174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.319 [2024-07-15 22:26:32.634466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.319 [2024-07-15 22:26:32.635239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.319 [2024-07-15 22:26:32.635276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:07.319 [2024-07-15 22:26:32.635287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:07.319 [2024-07-15 22:26:32.635525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:07.319 [2024-07-15 22:26:32.635750] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.319 [2024-07-15 22:26:32.635759] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.319 [2024-07-15 22:26:32.635766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.319 [2024-07-15 22:26:32.639281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.581 [2024-07-15 22:26:32.648383] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.581 [2024-07-15 22:26:32.649058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.581 [2024-07-15 22:26:32.649075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:07.581 [2024-07-15 22:26:32.649083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:07.581 [2024-07-15 22:26:32.649307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:07.581 [2024-07-15 22:26:32.649524] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.581 [2024-07-15 22:26:32.649532] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.581 [2024-07-15 22:26:32.649539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.581 [2024-07-15 22:26:32.653040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.581 [2024-07-15 22:26:32.662130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.581 [2024-07-15 22:26:32.662742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.581 [2024-07-15 22:26:32.662778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:07.581 [2024-07-15 22:26:32.662789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:07.581 [2024-07-15 22:26:32.663025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:07.581 [2024-07-15 22:26:32.663253] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.581 [2024-07-15 22:26:32.663262] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.581 [2024-07-15 22:26:32.663270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.581 [2024-07-15 22:26:32.666779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.581 [2024-07-15 22:26:32.675916] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.581 [2024-07-15 22:26:32.676676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.581 [2024-07-15 22:26:32.676713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:07.581 [2024-07-15 22:26:32.676724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:07.581 [2024-07-15 22:26:32.676960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:07.581 [2024-07-15 22:26:32.677187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.581 [2024-07-15 22:26:32.677197] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.581 [2024-07-15 22:26:32.677204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.581 [2024-07-15 22:26:32.680722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.581 [2024-07-15 22:26:32.689824] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.581 [2024-07-15 22:26:32.690462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.581 [2024-07-15 22:26:32.690480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:07.581 [2024-07-15 22:26:32.690488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:07.581 [2024-07-15 22:26:32.690705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:07.581 [2024-07-15 22:26:32.690921] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.581 [2024-07-15 22:26:32.690929] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.581 [2024-07-15 22:26:32.690936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.581 [2024-07-15 22:26:32.694445] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.581 [2024-07-15 22:26:32.703745] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.581 [2024-07-15 22:26:32.704461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.581 [2024-07-15 22:26:32.704498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:07.581 [2024-07-15 22:26:32.704509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:07.581 [2024-07-15 22:26:32.704745] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:07.581 [2024-07-15 22:26:32.704965] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.581 [2024-07-15 22:26:32.704973] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.581 [2024-07-15 22:26:32.704980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.581 [2024-07-15 22:26:32.708495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.581 [2024-07-15 22:26:32.717595] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.581 [2024-07-15 22:26:32.718418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.581 [2024-07-15 22:26:32.718456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:07.581 [2024-07-15 22:26:32.718468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:07.581 [2024-07-15 22:26:32.718705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:07.581 [2024-07-15 22:26:32.718925] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.581 [2024-07-15 22:26:32.718934] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.581 [2024-07-15 22:26:32.718941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.581 [2024-07-15 22:26:32.722454] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.581 [2024-07-15 22:26:32.731339] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.581 [2024-07-15 22:26:32.732013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.582 [2024-07-15 22:26:32.732030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:07.582 [2024-07-15 22:26:32.732042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:07.582 [2024-07-15 22:26:32.732265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:07.582 [2024-07-15 22:26:32.732482] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.582 [2024-07-15 22:26:32.732490] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.582 [2024-07-15 22:26:32.732497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.582 [2024-07-15 22:26:32.735996] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.582 [2024-07-15 22:26:32.745098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.582 [2024-07-15 22:26:32.745841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.582 [2024-07-15 22:26:32.745878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:07.582 [2024-07-15 22:26:32.745889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:07.582 [2024-07-15 22:26:32.746133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:07.582 [2024-07-15 22:26:32.746355] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.582 [2024-07-15 22:26:32.746363] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.582 [2024-07-15 22:26:32.746371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.582 [2024-07-15 22:26:32.749878] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.582 [2024-07-15 22:26:32.758969] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.582 [2024-07-15 22:26:32.759742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.582 [2024-07-15 22:26:32.759778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:07.582 [2024-07-15 22:26:32.759788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:07.582 [2024-07-15 22:26:32.760024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:07.582 [2024-07-15 22:26:32.760252] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.582 [2024-07-15 22:26:32.760262] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.582 [2024-07-15 22:26:32.760270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.582 [2024-07-15 22:26:32.763785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.582 [2024-07-15 22:26:32.772888] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.582 [2024-07-15 22:26:32.773512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.582 [2024-07-15 22:26:32.773530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:07.582 [2024-07-15 22:26:32.773538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:07.582 [2024-07-15 22:26:32.773755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:07.582 [2024-07-15 22:26:32.773971] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.582 [2024-07-15 22:26:32.773982] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.582 [2024-07-15 22:26:32.773989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.582 [2024-07-15 22:26:32.777498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.582 [2024-07-15 22:26:32.786795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.582 [2024-07-15 22:26:32.787427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.582 [2024-07-15 22:26:32.787444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:07.582 [2024-07-15 22:26:32.787451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:07.582 [2024-07-15 22:26:32.787668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:07.582 [2024-07-15 22:26:32.787884] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.582 [2024-07-15 22:26:32.787891] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.582 [2024-07-15 22:26:32.787898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.582 [2024-07-15 22:26:32.791404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.582 [2024-07-15 22:26:32.800698] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.582 [2024-07-15 22:26:32.801420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.582 [2024-07-15 22:26:32.801458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:07.582 [2024-07-15 22:26:32.801468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:07.582 [2024-07-15 22:26:32.801704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:07.582 [2024-07-15 22:26:32.801925] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.582 [2024-07-15 22:26:32.801933] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.582 [2024-07-15 22:26:32.801940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.582 [2024-07-15 22:26:32.805457] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.582 [2024-07-15 22:26:32.814554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.582 [2024-07-15 22:26:32.815245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.582 [2024-07-15 22:26:32.815282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:07.582 [2024-07-15 22:26:32.815292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:07.582 [2024-07-15 22:26:32.815528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:07.582 [2024-07-15 22:26:32.815748] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.582 [2024-07-15 22:26:32.815758] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.582 [2024-07-15 22:26:32.815765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.582 [2024-07-15 22:26:32.819284] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.582 [2024-07-15 22:26:32.828382] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.582 [2024-07-15 22:26:32.829065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.582 [2024-07-15 22:26:32.829083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:07.582 [2024-07-15 22:26:32.829090] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:07.582 [2024-07-15 22:26:32.829312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:07.582 [2024-07-15 22:26:32.829529] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.582 [2024-07-15 22:26:32.829537] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.582 [2024-07-15 22:26:32.829543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.582 [2024-07-15 22:26:32.833042] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.582 [2024-07-15 22:26:32.842135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.582 [2024-07-15 22:26:32.842770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.582 [2024-07-15 22:26:32.842785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:07.583 [2024-07-15 22:26:32.842793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:07.583 [2024-07-15 22:26:32.843009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:07.583 [2024-07-15 22:26:32.843239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.583 [2024-07-15 22:26:32.843247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.583 [2024-07-15 22:26:32.843253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.583 [2024-07-15 22:26:32.846756] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.583 [2024-07-15 22:26:32.856051] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.583 [2024-07-15 22:26:32.856696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.583 [2024-07-15 22:26:32.856712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:07.583 [2024-07-15 22:26:32.856719] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:07.583 [2024-07-15 22:26:32.856935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:07.583 [2024-07-15 22:26:32.857157] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.583 [2024-07-15 22:26:32.857165] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.583 [2024-07-15 22:26:32.857171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.583 [2024-07-15 22:26:32.860670] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.583 [2024-07-15 22:26:32.869976] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.583 [2024-07-15 22:26:32.870655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.583 [2024-07-15 22:26:32.870691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:07.583 [2024-07-15 22:26:32.870702] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:07.583 [2024-07-15 22:26:32.870942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:07.583 [2024-07-15 22:26:32.871170] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.583 [2024-07-15 22:26:32.871180] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.583 [2024-07-15 22:26:32.871187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.583 [2024-07-15 22:26:32.874696] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.583 [2024-07-15 22:26:32.883791] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.583 [2024-07-15 22:26:32.884438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.583 [2024-07-15 22:26:32.884456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:07.583 [2024-07-15 22:26:32.884464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:07.583 [2024-07-15 22:26:32.884681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:07.583 [2024-07-15 22:26:32.884897] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.583 [2024-07-15 22:26:32.884905] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.583 [2024-07-15 22:26:32.884912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.583 [2024-07-15 22:26:32.888417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.583 [2024-07-15 22:26:32.897716] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.583 [2024-07-15 22:26:32.898358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.583 [2024-07-15 22:26:32.898374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:07.583 [2024-07-15 22:26:32.898381] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:07.583 [2024-07-15 22:26:32.898597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:07.583 [2024-07-15 22:26:32.898814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.583 [2024-07-15 22:26:32.898824] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.583 [2024-07-15 22:26:32.898831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.583 [2024-07-15 22:26:32.902338] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.845 [2024-07-15 22:26:32.911633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.845 [2024-07-15 22:26:32.912295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.845 [2024-07-15 22:26:32.912311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:07.845 [2024-07-15 22:26:32.912318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:07.845 [2024-07-15 22:26:32.912534] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:07.845 [2024-07-15 22:26:32.912750] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.845 [2024-07-15 22:26:32.912758] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.845 [2024-07-15 22:26:32.912768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.845 [2024-07-15 22:26:32.916271] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.845 [2024-07-15 22:26:32.925567] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.845 [2024-07-15 22:26:32.926223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.845 [2024-07-15 22:26:32.926238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:07.845 [2024-07-15 22:26:32.926246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:07.845 [2024-07-15 22:26:32.926462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:07.845 [2024-07-15 22:26:32.926677] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.845 [2024-07-15 22:26:32.926685] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.845 [2024-07-15 22:26:32.926691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.845 [2024-07-15 22:26:32.930195] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.845 [2024-07-15 22:26:32.939514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.845 [2024-07-15 22:26:32.940171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.845 [2024-07-15 22:26:32.940187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:07.845 [2024-07-15 22:26:32.940194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:07.845 [2024-07-15 22:26:32.940410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:07.845 [2024-07-15 22:26:32.940626] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.845 [2024-07-15 22:26:32.940634] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.845 [2024-07-15 22:26:32.940640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.845 [2024-07-15 22:26:32.944154] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.845 [2024-07-15 22:26:32.953451] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.845 [2024-07-15 22:26:32.954141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.845 [2024-07-15 22:26:32.954177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:07.845 [2024-07-15 22:26:32.954189] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:07.846 [2024-07-15 22:26:32.954429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:07.846 [2024-07-15 22:26:32.954649] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.846 [2024-07-15 22:26:32.954657] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.846 [2024-07-15 22:26:32.954665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.846 [2024-07-15 22:26:32.958185] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.846 [2024-07-15 22:26:32.967296] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.846 [2024-07-15 22:26:32.968057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.846 [2024-07-15 22:26:32.968103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:07.846 [2024-07-15 22:26:32.968115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:07.846 [2024-07-15 22:26:32.968360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:07.846 [2024-07-15 22:26:32.968581] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.846 [2024-07-15 22:26:32.968590] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.846 [2024-07-15 22:26:32.968597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.846 [2024-07-15 22:26:32.972107] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.846 [2024-07-15 22:26:32.981208] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.846 [2024-07-15 22:26:32.981961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.846 [2024-07-15 22:26:32.981994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:07.846 [2024-07-15 22:26:32.982005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:07.846 [2024-07-15 22:26:32.982253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:07.846 [2024-07-15 22:26:32.982475] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.846 [2024-07-15 22:26:32.982483] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.846 [2024-07-15 22:26:32.982491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.846 [2024-07-15 22:26:32.985996] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.846 [2024-07-15 22:26:32.995095] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.846 [2024-07-15 22:26:32.995786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.846 [2024-07-15 22:26:32.995803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:07.846 [2024-07-15 22:26:32.995811] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:07.846 [2024-07-15 22:26:32.996028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:07.846 [2024-07-15 22:26:32.996250] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.846 [2024-07-15 22:26:32.996259] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.846 [2024-07-15 22:26:32.996266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.846 [2024-07-15 22:26:32.999769] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.846 [2024-07-15 22:26:33.008863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.846 [2024-07-15 22:26:33.009621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.846 [2024-07-15 22:26:33.009658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:07.846 [2024-07-15 22:26:33.009668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:07.846 [2024-07-15 22:26:33.009905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:07.846 [2024-07-15 22:26:33.010138] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.846 [2024-07-15 22:26:33.010148] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.846 [2024-07-15 22:26:33.010155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.846 [2024-07-15 22:26:33.013663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.846 [2024-07-15 22:26:33.022754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.846 [2024-07-15 22:26:33.023497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.846 [2024-07-15 22:26:33.023534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:07.846 [2024-07-15 22:26:33.023545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:07.846 [2024-07-15 22:26:33.023781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:07.846 [2024-07-15 22:26:33.024002] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.846 [2024-07-15 22:26:33.024010] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.846 [2024-07-15 22:26:33.024018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.846 [2024-07-15 22:26:33.027722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.846 [2024-07-15 22:26:33.036619] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.846 [2024-07-15 22:26:33.037321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-07-15 22:26:33.037358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:07.846 [2024-07-15 22:26:33.037369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:07.846 [2024-07-15 22:26:33.037605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:07.846 [2024-07-15 22:26:33.037825] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.846 [2024-07-15 22:26:33.037834] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.846 [2024-07-15 22:26:33.037841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.846 [2024-07-15 22:26:33.041359] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.846 [2024-07-15 22:26:33.050463] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.846 [2024-07-15 22:26:33.051219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-07-15 22:26:33.051256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:07.846 [2024-07-15 22:26:33.051267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:07.846 [2024-07-15 22:26:33.051503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:07.846 [2024-07-15 22:26:33.051723] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.846 [2024-07-15 22:26:33.051731] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.846 [2024-07-15 22:26:33.051738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.846 [2024-07-15 22:26:33.055253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.846 [2024-07-15 22:26:33.064355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.846 [2024-07-15 22:26:33.065160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-07-15 22:26:33.065198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:07.846 [2024-07-15 22:26:33.065209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:07.846 [2024-07-15 22:26:33.065449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:07.846 [2024-07-15 22:26:33.065670] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.846 [2024-07-15 22:26:33.065678] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.846 [2024-07-15 22:26:33.065686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.846 [2024-07-15 22:26:33.069200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.846 [2024-07-15 22:26:33.078294] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.846 [2024-07-15 22:26:33.078934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-07-15 22:26:33.078951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:07.846 [2024-07-15 22:26:33.078959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:07.846 [2024-07-15 22:26:33.079182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:07.846 [2024-07-15 22:26:33.079399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.846 [2024-07-15 22:26:33.079406] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.846 [2024-07-15 22:26:33.079413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.846 [2024-07-15 22:26:33.082915] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.846 [2024-07-15 22:26:33.092209] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.846 [2024-07-15 22:26:33.092944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-07-15 22:26:33.092981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:07.846 [2024-07-15 22:26:33.092991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:07.846 [2024-07-15 22:26:33.093235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:07.846 [2024-07-15 22:26:33.093456] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.846 [2024-07-15 22:26:33.093464] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.846 [2024-07-15 22:26:33.093472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.846 [2024-07-15 22:26:33.096976] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.846 [2024-07-15 22:26:33.106069] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.846 [2024-07-15 22:26:33.106711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-07-15 22:26:33.106730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:07.846 [2024-07-15 22:26:33.106743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:07.846 [2024-07-15 22:26:33.106960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:07.846 [2024-07-15 22:26:33.107182] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.846 [2024-07-15 22:26:33.107190] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.846 [2024-07-15 22:26:33.107197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.846 [2024-07-15 22:26:33.110695] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.846 [2024-07-15 22:26:33.119986] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.846 [2024-07-15 22:26:33.120745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-07-15 22:26:33.120781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:07.846 [2024-07-15 22:26:33.120792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:07.846 [2024-07-15 22:26:33.121028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:07.846 [2024-07-15 22:26:33.121256] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.846 [2024-07-15 22:26:33.121266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.846 [2024-07-15 22:26:33.121273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.846 [2024-07-15 22:26:33.124781] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.846 [2024-07-15 22:26:33.133871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.846 [2024-07-15 22:26:33.134516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-07-15 22:26:33.134534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:07.846 [2024-07-15 22:26:33.134542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:07.846 [2024-07-15 22:26:33.134758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:07.846 [2024-07-15 22:26:33.134974] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.846 [2024-07-15 22:26:33.134982] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.846 [2024-07-15 22:26:33.134989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.846 [2024-07-15 22:26:33.138494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.846 [2024-07-15 22:26:33.147799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.846 [2024-07-15 22:26:33.148514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.847 [2024-07-15 22:26:33.148551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:07.847 [2024-07-15 22:26:33.148562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:07.847 [2024-07-15 22:26:33.148798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:07.847 [2024-07-15 22:26:33.149018] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.847 [2024-07-15 22:26:33.149031] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.847 [2024-07-15 22:26:33.149039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.847 [2024-07-15 22:26:33.152554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.847 [2024-07-15 22:26:33.161652] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.847 [2024-07-15 22:26:33.162326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.847 [2024-07-15 22:26:33.162345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:07.847 [2024-07-15 22:26:33.162352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:07.847 [2024-07-15 22:26:33.162570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:07.847 [2024-07-15 22:26:33.162787] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.847 [2024-07-15 22:26:33.162795] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.847 [2024-07-15 22:26:33.162801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.847 [2024-07-15 22:26:33.166311] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.107 [2024-07-15 22:26:33.175404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.107 [2024-07-15 22:26:33.176064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.107 [2024-07-15 22:26:33.176080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.107 [2024-07-15 22:26:33.176087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.107 [2024-07-15 22:26:33.176308] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.107 [2024-07-15 22:26:33.176525] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.107 [2024-07-15 22:26:33.176533] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.107 [2024-07-15 22:26:33.176539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.107 [2024-07-15 22:26:33.180038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.107 [2024-07-15 22:26:33.189334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.107 [2024-07-15 22:26:33.189968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.107 [2024-07-15 22:26:33.189983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.107 [2024-07-15 22:26:33.189990] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.107 [2024-07-15 22:26:33.190211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.107 [2024-07-15 22:26:33.190428] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.107 [2024-07-15 22:26:33.190435] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.107 [2024-07-15 22:26:33.190442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.107 [2024-07-15 22:26:33.193942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.107 [2024-07-15 22:26:33.203237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.107 [2024-07-15 22:26:33.203964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.107 [2024-07-15 22:26:33.204001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.107 [2024-07-15 22:26:33.204014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.107 [2024-07-15 22:26:33.204260] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.107 [2024-07-15 22:26:33.204481] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.107 [2024-07-15 22:26:33.204490] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.107 [2024-07-15 22:26:33.204497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.107 [2024-07-15 22:26:33.208006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.108 [2024-07-15 22:26:33.217101] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.108 [2024-07-15 22:26:33.218369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.108 [2024-07-15 22:26:33.218391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.108 [2024-07-15 22:26:33.218399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.108 [2024-07-15 22:26:33.218622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.108 [2024-07-15 22:26:33.218840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.108 [2024-07-15 22:26:33.218848] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.108 [2024-07-15 22:26:33.218855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.108 [2024-07-15 22:26:33.222366] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.108 [2024-07-15 22:26:33.231065] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.108 [2024-07-15 22:26:33.231704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.108 [2024-07-15 22:26:33.231741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.108 [2024-07-15 22:26:33.231753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.108 [2024-07-15 22:26:33.231989] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.108 [2024-07-15 22:26:33.232218] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.108 [2024-07-15 22:26:33.232228] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.108 [2024-07-15 22:26:33.232235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.108 [2024-07-15 22:26:33.235742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.108 [2024-07-15 22:26:33.244846] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.108 [2024-07-15 22:26:33.245520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.108 [2024-07-15 22:26:33.245538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.108 [2024-07-15 22:26:33.245546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.108 [2024-07-15 22:26:33.245767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.108 [2024-07-15 22:26:33.245984] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.108 [2024-07-15 22:26:33.245992] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.108 [2024-07-15 22:26:33.245999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.108 [2024-07-15 22:26:33.249505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.108 [2024-07-15 22:26:33.258594] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.108 [2024-07-15 22:26:33.259340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.108 [2024-07-15 22:26:33.259377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.108 [2024-07-15 22:26:33.259389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.108 [2024-07-15 22:26:33.259629] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.108 [2024-07-15 22:26:33.259849] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.108 [2024-07-15 22:26:33.259857] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.108 [2024-07-15 22:26:33.259865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.108 [2024-07-15 22:26:33.263379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.108 [2024-07-15 22:26:33.272480] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.108 [2024-07-15 22:26:33.273225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.108 [2024-07-15 22:26:33.273262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.108 [2024-07-15 22:26:33.273274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.108 [2024-07-15 22:26:33.273514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.108 [2024-07-15 22:26:33.273735] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.108 [2024-07-15 22:26:33.273743] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.108 [2024-07-15 22:26:33.273751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.108 [2024-07-15 22:26:33.277271] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.108 [2024-07-15 22:26:33.286370] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.108 [2024-07-15 22:26:33.287040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.108 [2024-07-15 22:26:33.287059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.108 [2024-07-15 22:26:33.287066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.108 [2024-07-15 22:26:33.287290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.108 [2024-07-15 22:26:33.287508] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.108 [2024-07-15 22:26:33.287515] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.108 [2024-07-15 22:26:33.287526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.108 [2024-07-15 22:26:33.291028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.108 [2024-07-15 22:26:33.300118] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.108 [2024-07-15 22:26:33.300830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.108 [2024-07-15 22:26:33.300867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.108 [2024-07-15 22:26:33.300877] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.108 [2024-07-15 22:26:33.301114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.108 [2024-07-15 22:26:33.301342] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.108 [2024-07-15 22:26:33.301352] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.108 [2024-07-15 22:26:33.301359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.108 [2024-07-15 22:26:33.304867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.108 [2024-07-15 22:26:33.313958] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.108 [2024-07-15 22:26:33.314605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.108 [2024-07-15 22:26:33.314623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.108 [2024-07-15 22:26:33.314631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.108 [2024-07-15 22:26:33.314848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.108 [2024-07-15 22:26:33.315064] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.108 [2024-07-15 22:26:33.315072] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.108 [2024-07-15 22:26:33.315078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.108 [2024-07-15 22:26:33.318583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.108 [2024-07-15 22:26:33.327887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.108 [2024-07-15 22:26:33.328632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.108 [2024-07-15 22:26:33.328668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.108 [2024-07-15 22:26:33.328678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.108 [2024-07-15 22:26:33.328915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.108 [2024-07-15 22:26:33.329143] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.108 [2024-07-15 22:26:33.329152] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.108 [2024-07-15 22:26:33.329159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.108 [2024-07-15 22:26:33.332671] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.108 [2024-07-15 22:26:33.341771] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.108 [2024-07-15 22:26:33.342537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.108 [2024-07-15 22:26:33.342578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.108 [2024-07-15 22:26:33.342589] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.108 [2024-07-15 22:26:33.342825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.108 [2024-07-15 22:26:33.343045] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.108 [2024-07-15 22:26:33.343054] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.108 [2024-07-15 22:26:33.343061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.108 [2024-07-15 22:26:33.346587] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.108 [2024-07-15 22:26:33.355683] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.108 [2024-07-15 22:26:33.356408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.108 [2024-07-15 22:26:33.356444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.108 [2024-07-15 22:26:33.356455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.108 [2024-07-15 22:26:33.356691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.108 [2024-07-15 22:26:33.356911] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.108 [2024-07-15 22:26:33.356919] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.108 [2024-07-15 22:26:33.356927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.108 [2024-07-15 22:26:33.360442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.108 [2024-07-15 22:26:33.369544] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.108 [2024-07-15 22:26:33.370314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.108 [2024-07-15 22:26:33.370351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.108 [2024-07-15 22:26:33.370361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.108 [2024-07-15 22:26:33.370598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.108 [2024-07-15 22:26:33.370818] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.108 [2024-07-15 22:26:33.370826] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.108 [2024-07-15 22:26:33.370834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.108 [2024-07-15 22:26:33.374347] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.108 [2024-07-15 22:26:33.383435] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.108 [2024-07-15 22:26:33.384196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.108 [2024-07-15 22:26:33.384232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.108 [2024-07-15 22:26:33.384244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.108 [2024-07-15 22:26:33.384484] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.108 [2024-07-15 22:26:33.384708] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.108 [2024-07-15 22:26:33.384717] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.108 [2024-07-15 22:26:33.384725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.108 [2024-07-15 22:26:33.388240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.108 [2024-07-15 22:26:33.397323] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.108 [2024-07-15 22:26:33.398088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.108 [2024-07-15 22:26:33.398131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.108 [2024-07-15 22:26:33.398144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.108 [2024-07-15 22:26:33.398381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.108 [2024-07-15 22:26:33.398602] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.108 [2024-07-15 22:26:33.398610] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.108 [2024-07-15 22:26:33.398618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.108 [2024-07-15 22:26:33.402128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.108 [2024-07-15 22:26:33.411214] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.108 [2024-07-15 22:26:33.411980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.108 [2024-07-15 22:26:33.412017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.108 [2024-07-15 22:26:33.412029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.108 [2024-07-15 22:26:33.412276] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.108 [2024-07-15 22:26:33.412498] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.108 [2024-07-15 22:26:33.412506] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.108 [2024-07-15 22:26:33.412513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.108 [2024-07-15 22:26:33.416019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.108 [2024-07-15 22:26:33.425108] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.108 [2024-07-15 22:26:33.425843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.108 [2024-07-15 22:26:33.425879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.108 [2024-07-15 22:26:33.425889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.108 [2024-07-15 22:26:33.426136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.108 [2024-07-15 22:26:33.426357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.108 [2024-07-15 22:26:33.426365] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.108 [2024-07-15 22:26:33.426373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.108 [2024-07-15 22:26:33.429880] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.369 [2024-07-15 22:26:33.438981] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.369 [2024-07-15 22:26:33.439648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.369 [2024-07-15 22:26:33.439666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.369 [2024-07-15 22:26:33.439674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.369 [2024-07-15 22:26:33.439890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.369 [2024-07-15 22:26:33.440106] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.369 [2024-07-15 22:26:33.440114] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.369 [2024-07-15 22:26:33.440127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.369 [2024-07-15 22:26:33.443633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.369 [2024-07-15 22:26:33.452921] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.369 [2024-07-15 22:26:33.453576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.369 [2024-07-15 22:26:33.453591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.369 [2024-07-15 22:26:33.453599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.369 [2024-07-15 22:26:33.453815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.369 [2024-07-15 22:26:33.454030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.369 [2024-07-15 22:26:33.454037] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.369 [2024-07-15 22:26:33.454044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.369 [2024-07-15 22:26:33.457547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.370 [2024-07-15 22:26:33.466831] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.370 [2024-07-15 22:26:33.467542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.370 [2024-07-15 22:26:33.467578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.370 [2024-07-15 22:26:33.467589] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.370 [2024-07-15 22:26:33.467825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.370 [2024-07-15 22:26:33.468045] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.370 [2024-07-15 22:26:33.468053] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.370 [2024-07-15 22:26:33.468061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.370 [2024-07-15 22:26:33.471575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.370 [2024-07-15 22:26:33.480744] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.370 [2024-07-15 22:26:33.481388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.370 [2024-07-15 22:26:33.481406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.370 [2024-07-15 22:26:33.481417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.370 [2024-07-15 22:26:33.481636] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.370 [2024-07-15 22:26:33.481852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.370 [2024-07-15 22:26:33.481859] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.370 [2024-07-15 22:26:33.481866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.370 [2024-07-15 22:26:33.485370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.370 [2024-07-15 22:26:33.494656] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.370 [2024-07-15 22:26:33.495408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.370 [2024-07-15 22:26:33.495444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.370 [2024-07-15 22:26:33.495455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.370 [2024-07-15 22:26:33.495691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.370 [2024-07-15 22:26:33.495912] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.370 [2024-07-15 22:26:33.495920] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.370 [2024-07-15 22:26:33.495928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.370 [2024-07-15 22:26:33.499442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.370 [2024-07-15 22:26:33.508530] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.370 [2024-07-15 22:26:33.509253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.370 [2024-07-15 22:26:33.509290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.370 [2024-07-15 22:26:33.509300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.370 [2024-07-15 22:26:33.509537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.370 [2024-07-15 22:26:33.509757] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.370 [2024-07-15 22:26:33.509767] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.370 [2024-07-15 22:26:33.509774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.370 [2024-07-15 22:26:33.513291] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.370 [2024-07-15 22:26:33.522377] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.370 [2024-07-15 22:26:33.523092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.370 [2024-07-15 22:26:33.523135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.370 [2024-07-15 22:26:33.523146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.370 [2024-07-15 22:26:33.523382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.370 [2024-07-15 22:26:33.523602] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.370 [2024-07-15 22:26:33.523615] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.370 [2024-07-15 22:26:33.523623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.370 [2024-07-15 22:26:33.527133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.370 [2024-07-15 22:26:33.536233] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.370 [2024-07-15 22:26:33.536954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.370 [2024-07-15 22:26:33.536990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.370 [2024-07-15 22:26:33.537001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.370 [2024-07-15 22:26:33.537246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.370 [2024-07-15 22:26:33.537468] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.370 [2024-07-15 22:26:33.537476] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.370 [2024-07-15 22:26:33.537484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.370 [2024-07-15 22:26:33.540985] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.370 [2024-07-15 22:26:33.550082] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.370 [2024-07-15 22:26:33.550849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.370 [2024-07-15 22:26:33.550885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.370 [2024-07-15 22:26:33.550896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.370 [2024-07-15 22:26:33.551141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.370 [2024-07-15 22:26:33.551363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.370 [2024-07-15 22:26:33.551372] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.370 [2024-07-15 22:26:33.551379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.370 [2024-07-15 22:26:33.554884] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.370 [2024-07-15 22:26:33.563978] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.370 [2024-07-15 22:26:33.564648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.370 [2024-07-15 22:26:33.564665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.370 [2024-07-15 22:26:33.564673] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.370 [2024-07-15 22:26:33.564890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.370 [2024-07-15 22:26:33.565106] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.370 [2024-07-15 22:26:33.565114] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.370 [2024-07-15 22:26:33.565120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.370 [2024-07-15 22:26:33.568625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.370 [2024-07-15 22:26:33.577904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.370 [2024-07-15 22:26:33.578568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.370 [2024-07-15 22:26:33.578584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.370 [2024-07-15 22:26:33.578591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.370 [2024-07-15 22:26:33.578808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.370 [2024-07-15 22:26:33.579023] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.370 [2024-07-15 22:26:33.579031] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.370 [2024-07-15 22:26:33.579038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.370 [2024-07-15 22:26:33.582541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.370 [2024-07-15 22:26:33.591704] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.370 [2024-07-15 22:26:33.592421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.370 [2024-07-15 22:26:33.592458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.370 [2024-07-15 22:26:33.592469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.370 [2024-07-15 22:26:33.592705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.370 [2024-07-15 22:26:33.592925] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.370 [2024-07-15 22:26:33.592933] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.370 [2024-07-15 22:26:33.592941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.370 [2024-07-15 22:26:33.596460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.370 [2024-07-15 22:26:33.605550] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.370 [2024-07-15 22:26:33.606218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.370 [2024-07-15 22:26:33.606237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.370 [2024-07-15 22:26:33.606244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.370 [2024-07-15 22:26:33.606461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.371 [2024-07-15 22:26:33.606678] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.371 [2024-07-15 22:26:33.606685] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.371 [2024-07-15 22:26:33.606692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.371 [2024-07-15 22:26:33.610216] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.371 [2024-07-15 22:26:33.619296] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.371 [2024-07-15 22:26:33.620052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.371 [2024-07-15 22:26:33.620089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.371 [2024-07-15 22:26:33.620099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.371 [2024-07-15 22:26:33.620348] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.371 [2024-07-15 22:26:33.620570] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.371 [2024-07-15 22:26:33.620579] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.371 [2024-07-15 22:26:33.620586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.371 [2024-07-15 22:26:33.624091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.371 [2024-07-15 22:26:33.633177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.371 [2024-07-15 22:26:33.633908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.371 [2024-07-15 22:26:33.633944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.371 [2024-07-15 22:26:33.633954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.371 [2024-07-15 22:26:33.634200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.371 [2024-07-15 22:26:33.634421] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.371 [2024-07-15 22:26:33.634429] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.371 [2024-07-15 22:26:33.634437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.371 [2024-07-15 22:26:33.637942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.371 [2024-07-15 22:26:33.647034] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.371 [2024-07-15 22:26:33.647805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.371 [2024-07-15 22:26:33.647841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.371 [2024-07-15 22:26:33.647851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.371 [2024-07-15 22:26:33.648087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.371 [2024-07-15 22:26:33.648317] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.371 [2024-07-15 22:26:33.648326] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.371 [2024-07-15 22:26:33.648333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.371 [2024-07-15 22:26:33.651837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.371 [2024-07-15 22:26:33.660920] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.371 [2024-07-15 22:26:33.661692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.371 [2024-07-15 22:26:33.661729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.371 [2024-07-15 22:26:33.661739] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.371 [2024-07-15 22:26:33.661976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.371 [2024-07-15 22:26:33.662205] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.371 [2024-07-15 22:26:33.662215] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.371 [2024-07-15 22:26:33.662226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.371 [2024-07-15 22:26:33.665731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.371 [2024-07-15 22:26:33.674821] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.371 [2024-07-15 22:26:33.675546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.371 [2024-07-15 22:26:33.675583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.371 [2024-07-15 22:26:33.675594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.371 [2024-07-15 22:26:33.675830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.371 [2024-07-15 22:26:33.676050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.371 [2024-07-15 22:26:33.676058] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.371 [2024-07-15 22:26:33.676066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.371 [2024-07-15 22:26:33.679581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.371 [2024-07-15 22:26:33.688670] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.371 [2024-07-15 22:26:33.689434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.371 [2024-07-15 22:26:33.689470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.371 [2024-07-15 22:26:33.689482] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.371 [2024-07-15 22:26:33.689719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.371 [2024-07-15 22:26:33.689939] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.371 [2024-07-15 22:26:33.689948] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.371 [2024-07-15 22:26:33.689955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.632 [2024-07-15 22:26:33.693472] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.632 [2024-07-15 22:26:33.702565] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.632 [2024-07-15 22:26:33.703307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.632 [2024-07-15 22:26:33.703344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.632 [2024-07-15 22:26:33.703354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.632 [2024-07-15 22:26:33.703591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.632 [2024-07-15 22:26:33.703811] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.632 [2024-07-15 22:26:33.703820] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.632 [2024-07-15 22:26:33.703827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.632 [2024-07-15 22:26:33.707341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.632 [2024-07-15 22:26:33.716430] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.632 [2024-07-15 22:26:33.717192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.632 [2024-07-15 22:26:33.717233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.632 [2024-07-15 22:26:33.717246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.632 [2024-07-15 22:26:33.717485] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.632 [2024-07-15 22:26:33.717706] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.632 [2024-07-15 22:26:33.717714] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.632 [2024-07-15 22:26:33.717721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.632 [2024-07-15 22:26:33.721236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.632 [2024-07-15 22:26:33.730323] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.632 [2024-07-15 22:26:33.730952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.632 [2024-07-15 22:26:33.730970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.632 [2024-07-15 22:26:33.730978] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.632 [2024-07-15 22:26:33.731201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.632 [2024-07-15 22:26:33.731418] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.632 [2024-07-15 22:26:33.731426] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.632 [2024-07-15 22:26:33.731432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.632 [2024-07-15 22:26:33.734933] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.632 [2024-07-15 22:26:33.744255] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.632 [2024-07-15 22:26:33.744922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.632 [2024-07-15 22:26:33.744938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.632 [2024-07-15 22:26:33.744945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.632 [2024-07-15 22:26:33.745167] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.632 [2024-07-15 22:26:33.745384] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.632 [2024-07-15 22:26:33.745392] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.633 [2024-07-15 22:26:33.745399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.633 [2024-07-15 22:26:33.748900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.633 [2024-07-15 22:26:33.758197] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.633 [2024-07-15 22:26:33.758848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.633 [2024-07-15 22:26:33.758863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.633 [2024-07-15 22:26:33.758871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.633 [2024-07-15 22:26:33.759087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.633 [2024-07-15 22:26:33.759312] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.633 [2024-07-15 22:26:33.759321] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.633 [2024-07-15 22:26:33.759327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.633 [2024-07-15 22:26:33.762827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.633 [2024-07-15 22:26:33.772123] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.633 [2024-07-15 22:26:33.772780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.633 [2024-07-15 22:26:33.772796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.633 [2024-07-15 22:26:33.772803] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.633 [2024-07-15 22:26:33.773019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.633 [2024-07-15 22:26:33.773240] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.633 [2024-07-15 22:26:33.773248] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.633 [2024-07-15 22:26:33.773255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.633 [2024-07-15 22:26:33.776752] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.633 [2024-07-15 22:26:33.786035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.633 [2024-07-15 22:26:33.786727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.633 [2024-07-15 22:26:33.786742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.633 [2024-07-15 22:26:33.786749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.633 [2024-07-15 22:26:33.786965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.633 [2024-07-15 22:26:33.787185] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.633 [2024-07-15 22:26:33.787193] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.633 [2024-07-15 22:26:33.787200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.633 [2024-07-15 22:26:33.790701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.633 [2024-07-15 22:26:33.799795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.633 [2024-07-15 22:26:33.800418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.633 [2024-07-15 22:26:33.800436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.633 [2024-07-15 22:26:33.800443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.633 [2024-07-15 22:26:33.800659] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.633 [2024-07-15 22:26:33.800875] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.633 [2024-07-15 22:26:33.800882] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.633 [2024-07-15 22:26:33.800889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.633 [2024-07-15 22:26:33.804394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.633 [2024-07-15 22:26:33.813677] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.633 [2024-07-15 22:26:33.814328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.633 [2024-07-15 22:26:33.814344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.633 [2024-07-15 22:26:33.814351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.633 [2024-07-15 22:26:33.814567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.633 [2024-07-15 22:26:33.814782] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.633 [2024-07-15 22:26:33.814789] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.633 [2024-07-15 22:26:33.814796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.633 [2024-07-15 22:26:33.818299] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.633 [2024-07-15 22:26:33.827581] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.633 [2024-07-15 22:26:33.828305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.633 [2024-07-15 22:26:33.828341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.633 [2024-07-15 22:26:33.828351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.633 [2024-07-15 22:26:33.828587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.633 [2024-07-15 22:26:33.828808] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.633 [2024-07-15 22:26:33.828816] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.633 [2024-07-15 22:26:33.828824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.633 [2024-07-15 22:26:33.832340] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.633 [2024-07-15 22:26:33.841429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.633 [2024-07-15 22:26:33.842193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.633 [2024-07-15 22:26:33.842230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.633 [2024-07-15 22:26:33.842242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.633 [2024-07-15 22:26:33.842482] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.633 [2024-07-15 22:26:33.842702] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.633 [2024-07-15 22:26:33.842710] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.633 [2024-07-15 22:26:33.842717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.633 [2024-07-15 22:26:33.846237] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.633 [2024-07-15 22:26:33.855324] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.633 [2024-07-15 22:26:33.856041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.633 [2024-07-15 22:26:33.856078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.633 [2024-07-15 22:26:33.856094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.633 [2024-07-15 22:26:33.856343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.633 [2024-07-15 22:26:33.856565] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.633 [2024-07-15 22:26:33.856573] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.633 [2024-07-15 22:26:33.856581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.633 [2024-07-15 22:26:33.860084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.633 [2024-07-15 22:26:33.869171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.633 [2024-07-15 22:26:33.869832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.633 [2024-07-15 22:26:33.869869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.633 [2024-07-15 22:26:33.869879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.633 [2024-07-15 22:26:33.870116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.633 [2024-07-15 22:26:33.870345] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.633 [2024-07-15 22:26:33.870354] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.633 [2024-07-15 22:26:33.870361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.633 [2024-07-15 22:26:33.873866] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.633 [2024-07-15 22:26:33.882953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.633 [2024-07-15 22:26:33.883615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.633 [2024-07-15 22:26:33.883633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.633 [2024-07-15 22:26:33.883641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.633 [2024-07-15 22:26:33.883858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.633 [2024-07-15 22:26:33.884074] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.633 [2024-07-15 22:26:33.884081] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.633 [2024-07-15 22:26:33.884088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.633 [2024-07-15 22:26:33.887591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.633 [2024-07-15 22:26:33.896877] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.633 [2024-07-15 22:26:33.897535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.633 [2024-07-15 22:26:33.897551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.634 [2024-07-15 22:26:33.897559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.634 [2024-07-15 22:26:33.897775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.634 [2024-07-15 22:26:33.897990] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.634 [2024-07-15 22:26:33.898004] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.634 [2024-07-15 22:26:33.898011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.634 [2024-07-15 22:26:33.901514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.634 [2024-07-15 22:26:33.910799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.634 [2024-07-15 22:26:33.911516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.634 [2024-07-15 22:26:33.911553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.634 [2024-07-15 22:26:33.911563] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.634 [2024-07-15 22:26:33.911799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.634 [2024-07-15 22:26:33.912019] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.634 [2024-07-15 22:26:33.912028] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.634 [2024-07-15 22:26:33.912035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.634 [2024-07-15 22:26:33.915548] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.634 [2024-07-15 22:26:33.924633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.634 [2024-07-15 22:26:33.925323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.634 [2024-07-15 22:26:33.925360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.634 [2024-07-15 22:26:33.925370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.634 [2024-07-15 22:26:33.925606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.634 [2024-07-15 22:26:33.925826] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.634 [2024-07-15 22:26:33.925835] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.634 [2024-07-15 22:26:33.925842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.634 [2024-07-15 22:26:33.929358] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.634 [2024-07-15 22:26:33.938448] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.634 [2024-07-15 22:26:33.939207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.634 [2024-07-15 22:26:33.939244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.634 [2024-07-15 22:26:33.939256] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.634 [2024-07-15 22:26:33.939493] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.634 [2024-07-15 22:26:33.939714] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.634 [2024-07-15 22:26:33.939723] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.634 [2024-07-15 22:26:33.939730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.634 [2024-07-15 22:26:33.943253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.634 [2024-07-15 22:26:33.952350] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.634 [2024-07-15 22:26:33.953119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.634 [2024-07-15 22:26:33.953163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.634 [2024-07-15 22:26:33.953174] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.634 [2024-07-15 22:26:33.953414] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.634 [2024-07-15 22:26:33.953635] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.634 [2024-07-15 22:26:33.953643] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.634 [2024-07-15 22:26:33.953650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.895 [2024-07-15 22:26:33.957167] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.895 [2024-07-15 22:26:33.966267] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.895 [2024-07-15 22:26:33.966922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.895 [2024-07-15 22:26:33.966959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.895 [2024-07-15 22:26:33.966969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.895 [2024-07-15 22:26:33.967214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.895 [2024-07-15 22:26:33.967436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.895 [2024-07-15 22:26:33.967444] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.895 [2024-07-15 22:26:33.967451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.895 [2024-07-15 22:26:33.970957] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.895 [2024-07-15 22:26:33.980041] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.895 [2024-07-15 22:26:33.980810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.895 [2024-07-15 22:26:33.980847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.895 [2024-07-15 22:26:33.980857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.895 [2024-07-15 22:26:33.981093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.895 [2024-07-15 22:26:33.981323] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.895 [2024-07-15 22:26:33.981333] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.895 [2024-07-15 22:26:33.981340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.895 [2024-07-15 22:26:33.984845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.895 [2024-07-15 22:26:33.993952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.895 [2024-07-15 22:26:33.994725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.895 [2024-07-15 22:26:33.994762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.895 [2024-07-15 22:26:33.994773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.895 [2024-07-15 22:26:33.995013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.895 [2024-07-15 22:26:33.995241] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.895 [2024-07-15 22:26:33.995251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.895 [2024-07-15 22:26:33.995259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.895 [2024-07-15 22:26:33.998771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.895 [2024-07-15 22:26:34.007876] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.895 [2024-07-15 22:26:34.008602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.895 [2024-07-15 22:26:34.008638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.895 [2024-07-15 22:26:34.008649] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.895 [2024-07-15 22:26:34.008886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.895 [2024-07-15 22:26:34.009107] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.895 [2024-07-15 22:26:34.009115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.895 [2024-07-15 22:26:34.009133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.895 [2024-07-15 22:26:34.012640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.895 [2024-07-15 22:26:34.021736] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.895 [2024-07-15 22:26:34.022378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.895 [2024-07-15 22:26:34.022399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.895 [2024-07-15 22:26:34.022407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.895 [2024-07-15 22:26:34.022623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.895 [2024-07-15 22:26:34.022841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.895 [2024-07-15 22:26:34.022848] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.895 [2024-07-15 22:26:34.022855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.895 [2024-07-15 22:26:34.026543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.895 [2024-07-15 22:26:34.035643] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.895 [2024-07-15 22:26:34.036387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.895 [2024-07-15 22:26:34.036424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.895 [2024-07-15 22:26:34.036435] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.896 [2024-07-15 22:26:34.036671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.896 [2024-07-15 22:26:34.036891] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.896 [2024-07-15 22:26:34.036900] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.896 [2024-07-15 22:26:34.036911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.896 [2024-07-15 22:26:34.040425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.896 [2024-07-15 22:26:34.049531] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.896 [2024-07-15 22:26:34.050330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.896 [2024-07-15 22:26:34.050367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.896 [2024-07-15 22:26:34.050377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.896 [2024-07-15 22:26:34.050614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.896 [2024-07-15 22:26:34.050835] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.896 [2024-07-15 22:26:34.050843] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.896 [2024-07-15 22:26:34.050850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.896 [2024-07-15 22:26:34.054363] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.896 [2024-07-15 22:26:34.063452] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.896 [2024-07-15 22:26:34.064221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.896 [2024-07-15 22:26:34.064258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.896 [2024-07-15 22:26:34.064270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.896 [2024-07-15 22:26:34.064507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.896 [2024-07-15 22:26:34.064727] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.896 [2024-07-15 22:26:34.064736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.896 [2024-07-15 22:26:34.064743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.896 [2024-07-15 22:26:34.068258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.896 [2024-07-15 22:26:34.077353] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.896 [2024-07-15 22:26:34.077982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.896 [2024-07-15 22:26:34.078000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.896 [2024-07-15 22:26:34.078008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.896 [2024-07-15 22:26:34.078231] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.896 [2024-07-15 22:26:34.078449] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.896 [2024-07-15 22:26:34.078456] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.896 [2024-07-15 22:26:34.078463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.896 [2024-07-15 22:26:34.081964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.896 [2024-07-15 22:26:34.091262] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.896 [2024-07-15 22:26:34.091996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.896 [2024-07-15 22:26:34.092037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.896 [2024-07-15 22:26:34.092048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.896 [2024-07-15 22:26:34.092293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.896 [2024-07-15 22:26:34.092514] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.896 [2024-07-15 22:26:34.092523] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.896 [2024-07-15 22:26:34.092530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.896 [2024-07-15 22:26:34.096034] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.896 [2024-07-15 22:26:34.105126] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.896 [2024-07-15 22:26:34.105890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.896 [2024-07-15 22:26:34.105926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.896 [2024-07-15 22:26:34.105937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.896 [2024-07-15 22:26:34.106182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.896 [2024-07-15 22:26:34.106404] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.896 [2024-07-15 22:26:34.106412] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.896 [2024-07-15 22:26:34.106420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.896 [2024-07-15 22:26:34.109924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.896 [2024-07-15 22:26:34.119009] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.896 [2024-07-15 22:26:34.119756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.896 [2024-07-15 22:26:34.119792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.896 [2024-07-15 22:26:34.119803] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.896 [2024-07-15 22:26:34.120039] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.896 [2024-07-15 22:26:34.120269] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.896 [2024-07-15 22:26:34.120279] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.896 [2024-07-15 22:26:34.120286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.896 [2024-07-15 22:26:34.123790] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.896 [2024-07-15 22:26:34.132874] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.896 [2024-07-15 22:26:34.133621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.896 [2024-07-15 22:26:34.133657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.896 [2024-07-15 22:26:34.133667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.896 [2024-07-15 22:26:34.133904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.896 [2024-07-15 22:26:34.134137] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.896 [2024-07-15 22:26:34.134147] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.896 [2024-07-15 22:26:34.134155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.896 [2024-07-15 22:26:34.137661] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.896 [2024-07-15 22:26:34.146752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.896 [2024-07-15 22:26:34.147499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.896 [2024-07-15 22:26:34.147536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.896 [2024-07-15 22:26:34.147547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.896 [2024-07-15 22:26:34.147783] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.896 [2024-07-15 22:26:34.148004] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.896 [2024-07-15 22:26:34.148012] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.896 [2024-07-15 22:26:34.148019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.896 [2024-07-15 22:26:34.151533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.896 [2024-07-15 22:26:34.160630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.896 [2024-07-15 22:26:34.161300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.896 [2024-07-15 22:26:34.161318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.896 [2024-07-15 22:26:34.161326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.896 [2024-07-15 22:26:34.161543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.896 [2024-07-15 22:26:34.161759] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.896 [2024-07-15 22:26:34.161767] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.896 [2024-07-15 22:26:34.161774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.896 [2024-07-15 22:26:34.165287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.896 [2024-07-15 22:26:34.174369] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.896 [2024-07-15 22:26:34.175079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.896 [2024-07-15 22:26:34.175116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.896 [2024-07-15 22:26:34.175136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.896 [2024-07-15 22:26:34.175373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.896 [2024-07-15 22:26:34.175593] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.896 [2024-07-15 22:26:34.175601] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.896 [2024-07-15 22:26:34.175609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.896 [2024-07-15 22:26:34.179118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.896 [2024-07-15 22:26:34.188206] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.896 [2024-07-15 22:26:34.188970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.896 [2024-07-15 22:26:34.189006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.897 [2024-07-15 22:26:34.189017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.897 [2024-07-15 22:26:34.189262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.897 [2024-07-15 22:26:34.189483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.897 [2024-07-15 22:26:34.189491] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.897 [2024-07-15 22:26:34.189499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.897 [2024-07-15 22:26:34.193008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.897 [2024-07-15 22:26:34.202094] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.897 [2024-07-15 22:26:34.202860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.897 [2024-07-15 22:26:34.202897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.897 [2024-07-15 22:26:34.202908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.897 [2024-07-15 22:26:34.203153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.897 [2024-07-15 22:26:34.203374] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.897 [2024-07-15 22:26:34.203382] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.897 [2024-07-15 22:26:34.203390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.897 [2024-07-15 22:26:34.206893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.897 [2024-07-15 22:26:34.215983] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.897 [2024-07-15 22:26:34.216684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.897 [2024-07-15 22:26:34.216721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:08.897 [2024-07-15 22:26:34.216731] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:08.897 [2024-07-15 22:26:34.216968] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:08.897 [2024-07-15 22:26:34.217199] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.897 [2024-07-15 22:26:34.217208] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.897 [2024-07-15 22:26:34.217216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.158 [2024-07-15 22:26:34.220724] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.158 [2024-07-15 22:26:34.229822] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.158 [2024-07-15 22:26:34.230529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.158 [2024-07-15 22:26:34.230566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.158 [2024-07-15 22:26:34.230581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.158 [2024-07-15 22:26:34.230817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.158 [2024-07-15 22:26:34.231038] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.158 [2024-07-15 22:26:34.231046] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.158 [2024-07-15 22:26:34.231054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.158 [2024-07-15 22:26:34.234567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.158 [2024-07-15 22:26:34.243665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.158 [2024-07-15 22:26:34.244382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.158 [2024-07-15 22:26:34.244418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.158 [2024-07-15 22:26:34.244429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.158 [2024-07-15 22:26:34.244666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.158 [2024-07-15 22:26:34.244886] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.158 [2024-07-15 22:26:34.244896] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.158 [2024-07-15 22:26:34.244903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.158 [2024-07-15 22:26:34.248418] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.158 [2024-07-15 22:26:34.257509] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.158 [2024-07-15 22:26:34.258166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.158 [2024-07-15 22:26:34.258202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.158 [2024-07-15 22:26:34.258213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.159 [2024-07-15 22:26:34.258449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.159 [2024-07-15 22:26:34.258669] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.159 [2024-07-15 22:26:34.258677] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.159 [2024-07-15 22:26:34.258684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.159 [2024-07-15 22:26:34.262200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.159 [2024-07-15 22:26:34.271283] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.159 [2024-07-15 22:26:34.272048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.159 [2024-07-15 22:26:34.272084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.159 [2024-07-15 22:26:34.272094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.159 [2024-07-15 22:26:34.272340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.159 [2024-07-15 22:26:34.272561] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.159 [2024-07-15 22:26:34.272574] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.159 [2024-07-15 22:26:34.272581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.159 [2024-07-15 22:26:34.276087] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.159 [2024-07-15 22:26:34.285178] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.159 [2024-07-15 22:26:34.285939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.159 [2024-07-15 22:26:34.285976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.159 [2024-07-15 22:26:34.285986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.159 [2024-07-15 22:26:34.286231] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.159 [2024-07-15 22:26:34.286452] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.159 [2024-07-15 22:26:34.286461] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.159 [2024-07-15 22:26:34.286468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.159 [2024-07-15 22:26:34.289978] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.159 [2024-07-15 22:26:34.299080] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.159 [2024-07-15 22:26:34.299851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.159 [2024-07-15 22:26:34.299888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.159 [2024-07-15 22:26:34.299898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.159 [2024-07-15 22:26:34.300143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.159 [2024-07-15 22:26:34.300364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.159 [2024-07-15 22:26:34.300373] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.159 [2024-07-15 22:26:34.300380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.159 [2024-07-15 22:26:34.303885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.159 [2024-07-15 22:26:34.312972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.159 [2024-07-15 22:26:34.313706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.159 [2024-07-15 22:26:34.313742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.159 [2024-07-15 22:26:34.313753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.159 [2024-07-15 22:26:34.313989] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.159 [2024-07-15 22:26:34.314217] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.159 [2024-07-15 22:26:34.314227] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.159 [2024-07-15 22:26:34.314234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.159 [2024-07-15 22:26:34.317741] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.159 [2024-07-15 22:26:34.326823] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.159 [2024-07-15 22:26:34.327596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.159 [2024-07-15 22:26:34.327633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.159 [2024-07-15 22:26:34.327644] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.159 [2024-07-15 22:26:34.327880] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.159 [2024-07-15 22:26:34.328100] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.159 [2024-07-15 22:26:34.328109] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.159 [2024-07-15 22:26:34.328116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.159 [2024-07-15 22:26:34.331639] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.159 [2024-07-15 22:26:34.340745] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.159 [2024-07-15 22:26:34.341473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.159 [2024-07-15 22:26:34.341510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.159 [2024-07-15 22:26:34.341521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.159 [2024-07-15 22:26:34.341758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.159 [2024-07-15 22:26:34.341978] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.159 [2024-07-15 22:26:34.341986] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.159 [2024-07-15 22:26:34.341994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.159 [2024-07-15 22:26:34.345518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.159 [2024-07-15 22:26:34.354617] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.159 [2024-07-15 22:26:34.355392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.159 [2024-07-15 22:26:34.355429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.159 [2024-07-15 22:26:34.355439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.159 [2024-07-15 22:26:34.355676] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.159 [2024-07-15 22:26:34.355896] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.159 [2024-07-15 22:26:34.355905] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.159 [2024-07-15 22:26:34.355912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.159 [2024-07-15 22:26:34.359422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.159 [2024-07-15 22:26:34.368531] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.159 [2024-07-15 22:26:34.369384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.159 [2024-07-15 22:26:34.369423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.159 [2024-07-15 22:26:34.369434] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.159 [2024-07-15 22:26:34.369675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.159 [2024-07-15 22:26:34.369896] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.159 [2024-07-15 22:26:34.369904] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.159 [2024-07-15 22:26:34.369912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.159 [2024-07-15 22:26:34.373424] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.159 [2024-07-15 22:26:34.382315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.159 [2024-07-15 22:26:34.383049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.159 [2024-07-15 22:26:34.383086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.159 [2024-07-15 22:26:34.383098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.159 [2024-07-15 22:26:34.383346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.159 [2024-07-15 22:26:34.383567] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.159 [2024-07-15 22:26:34.383575] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.159 [2024-07-15 22:26:34.383583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.159 [2024-07-15 22:26:34.387094] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.159 [2024-07-15 22:26:34.396203] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.159 [2024-07-15 22:26:34.396861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.159 [2024-07-15 22:26:34.396879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.159 [2024-07-15 22:26:34.396886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.159 [2024-07-15 22:26:34.397103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.159 [2024-07-15 22:26:34.397327] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.159 [2024-07-15 22:26:34.397335] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.159 [2024-07-15 22:26:34.397342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.159 [2024-07-15 22:26:34.400848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.159 [2024-07-15 22:26:34.410152] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.159 [2024-07-15 22:26:34.410777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.159 [2024-07-15 22:26:34.410792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.159 [2024-07-15 22:26:34.410800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.159 [2024-07-15 22:26:34.411016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.159 [2024-07-15 22:26:34.411238] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.159 [2024-07-15 22:26:34.411246] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.159 [2024-07-15 22:26:34.411257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.159 [2024-07-15 22:26:34.414761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.159 [2024-07-15 22:26:34.424067] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.159 [2024-07-15 22:26:34.424761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.159 [2024-07-15 22:26:34.424776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.159 [2024-07-15 22:26:34.424783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.159 [2024-07-15 22:26:34.424999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.159 [2024-07-15 22:26:34.425221] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.159 [2024-07-15 22:26:34.425229] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.159 [2024-07-15 22:26:34.425235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.159 [2024-07-15 22:26:34.428740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.159 [2024-07-15 22:26:34.437907] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.159 [2024-07-15 22:26:34.438556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.159 [2024-07-15 22:26:34.438571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.159 [2024-07-15 22:26:34.438578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.159 [2024-07-15 22:26:34.438794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.159 [2024-07-15 22:26:34.439010] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.159 [2024-07-15 22:26:34.439018] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.159 [2024-07-15 22:26:34.439024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.159 [2024-07-15 22:26:34.442571] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.159 [2024-07-15 22:26:34.451675] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.159 [2024-07-15 22:26:34.452344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.159 [2024-07-15 22:26:34.452361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.159 [2024-07-15 22:26:34.452368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.159 [2024-07-15 22:26:34.452584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.159 [2024-07-15 22:26:34.452800] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.159 [2024-07-15 22:26:34.452808] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.159 [2024-07-15 22:26:34.452814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.159 [2024-07-15 22:26:34.456323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.159 [2024-07-15 22:26:34.465625] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.159 [2024-07-15 22:26:34.466449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.159 [2024-07-15 22:26:34.466490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.159 [2024-07-15 22:26:34.466502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.159 [2024-07-15 22:26:34.466740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.159 [2024-07-15 22:26:34.466960] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.159 [2024-07-15 22:26:34.466968] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.159 [2024-07-15 22:26:34.466976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.159 [2024-07-15 22:26:34.470489] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.159 [2024-07-15 22:26:34.479378] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.159 [2024-07-15 22:26:34.480105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.159 [2024-07-15 22:26:34.480149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.159 [2024-07-15 22:26:34.480161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.159 [2024-07-15 22:26:34.480399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.159 [2024-07-15 22:26:34.480619] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.159 [2024-07-15 22:26:34.480628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.159 [2024-07-15 22:26:34.480636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.421 [2024-07-15 22:26:34.484148] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.421 [2024-07-15 22:26:34.493244] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.421 [2024-07-15 22:26:34.493931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.421 [2024-07-15 22:26:34.493950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.421 [2024-07-15 22:26:34.493957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.421 [2024-07-15 22:26:34.494181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.421 [2024-07-15 22:26:34.494399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.421 [2024-07-15 22:26:34.494406] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.421 [2024-07-15 22:26:34.494413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.421 [2024-07-15 22:26:34.497913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2954536 Killed "${NVMF_APP[@]}" "$@"
00:29:09.421 22:26:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:29:09.421 22:26:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:29:09.421 22:26:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:29:09.421 22:26:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable
00:29:09.421 22:26:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:09.421 [2024-07-15 22:26:34.507004] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:09.421 [2024-07-15 22:26:34.507729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.421 [2024-07-15 22:26:34.507766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:09.421 [2024-07-15 22:26:34.507777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:09.421 [2024-07-15 22:26:34.508013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:09.421 [2024-07-15 22:26:34.508242] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:09.421 [2024-07-15 22:26:34.508252] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:09.421 [2024-07-15 22:26:34.508260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:09.421 22:26:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2956131
00:29:09.421 22:26:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2956131
00:29:09.421 22:26:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:29:09.421 22:26:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2956131 ']'
00:29:09.421 22:26:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:09.421 22:26:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:09.421 22:26:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:09.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:09.421 [2024-07-15 22:26:34.511769] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
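At this point bdevperf.sh has killed the previous nvmf_tgt (PID 2954536), and tgt_init/nvmfappstart relaunch it inside the cvl_0_0_ns_spdk network namespace; waitforlisten 2956131 then blocks until the new process is serving the RPC socket at /var/tmp/spdk.sock. A minimal sketch of such a wait, using only standard shell tools and the values traced above (this is not the real waitforlisten helper from autotest_common.sh, just the idea behind it):

  # Poll until the SPDK RPC Unix socket appears, or give up if the target dies first.
  nvmfpid=2956131
  rpc_addr=/var/tmp/spdk.sock
  max_retries=100
  for ((i = 0; i < max_retries; i++)); do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
      [ -S "$rpc_addr" ] && break   # socket node exists once the app is listening
      sleep 0.1
  done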
00:29:09.421 22:26:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:09.421 22:26:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:09.421 [2024-07-15 22:26:34.520869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.421 [2024-07-15 22:26:34.521530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.421 [2024-07-15 22:26:34.521548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.421 [2024-07-15 22:26:34.521556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.421 [2024-07-15 22:26:34.521773] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.421 [2024-07-15 22:26:34.521990] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.421 [2024-07-15 22:26:34.521998] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.421 [2024-07-15 22:26:34.522005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.421 [2024-07-15 22:26:34.525515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.421 [2024-07-15 22:26:34.534819] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.421 [2024-07-15 22:26:34.535545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.421 [2024-07-15 22:26:34.535584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.421 [2024-07-15 22:26:34.535594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.422 [2024-07-15 22:26:34.535830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.422 [2024-07-15 22:26:34.536051] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.422 [2024-07-15 22:26:34.536063] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.422 [2024-07-15 22:26:34.536071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.422 [2024-07-15 22:26:34.539591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.422 [2024-07-15 22:26:34.548701] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.422 [2024-07-15 22:26:34.549456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.422 [2024-07-15 22:26:34.549492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.422 [2024-07-15 22:26:34.549503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.422 [2024-07-15 22:26:34.549739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.422 [2024-07-15 22:26:34.549960] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.422 [2024-07-15 22:26:34.549969] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.422 [2024-07-15 22:26:34.549977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.422 [2024-07-15 22:26:34.553493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.422 [2024-07-15 22:26:34.560098] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:29:09.422 [2024-07-15 22:26:34.560148] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:09.422 [2024-07-15 22:26:34.562590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.422 [2024-07-15 22:26:34.563441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.422 [2024-07-15 22:26:34.563477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.422 [2024-07-15 22:26:34.563488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.422 [2024-07-15 22:26:34.563731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.422 [2024-07-15 22:26:34.563953] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.422 [2024-07-15 22:26:34.563962] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.422 [2024-07-15 22:26:34.563969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.422 [2024-07-15 22:26:34.567485] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
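The relaunched target was started with -m 0xE, which the EAL parameters line above echoes as -c 0xE. 0xE is binary 1110, i.e. CPU cores 1, 2 and 3, which is why the startup notices further down report three available cores and reactors on cores 1, 2 and 3. A small decoder for such a mask, plain shell arithmetic and purely illustrative:

  # Print the CPU cores selected by an SPDK/DPDK hex core mask.
  mask=0xE
  for core in $(seq 0 63); do
      (( (mask >> core) & 1 )) && echo "core $core"
  done
  # prints core 1, core 2 and core 3 for mask 0xE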
00:29:09.422 [2024-07-15 22:26:34.576378] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.422 [2024-07-15 22:26:34.577022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.422 [2024-07-15 22:26:34.577059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.422 [2024-07-15 22:26:34.577070] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.422 [2024-07-15 22:26:34.577314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.422 [2024-07-15 22:26:34.577536] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.422 [2024-07-15 22:26:34.577552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.422 [2024-07-15 22:26:34.577560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.422 [2024-07-15 22:26:34.581067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.422 [2024-07-15 22:26:34.590173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.422 [2024-07-15 22:26:34.590853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.422 [2024-07-15 22:26:34.590871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.422 [2024-07-15 22:26:34.590879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.422 [2024-07-15 22:26:34.591096] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.422 [2024-07-15 22:26:34.591319] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.422 [2024-07-15 22:26:34.591327] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.422 [2024-07-15 22:26:34.591334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.422 EAL: No free 2048 kB hugepages reported on node 1 00:29:09.422 [2024-07-15 22:26:34.594840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
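The "EAL: No free 2048 kB hugepages reported on node 1" notice above means NUMA node 1 contributes no 2 MB hugepages to this run; the target still comes up here, so the allocation was satisfied elsewhere. If a run fails at this point instead, the per-node counters are worth checking; the commands below use the standard Linux procfs/sysfs locations and are shown only as an illustration:

  # Overall hugepage accounting and the per-NUMA-node 2 MB page counts.
  grep -i huge /proc/meminfo
  cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages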
00:29:09.422 [2024-07-15 22:26:34.603935] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.422 [2024-07-15 22:26:34.604669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.422 [2024-07-15 22:26:34.604706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.422 [2024-07-15 22:26:34.604716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.422 [2024-07-15 22:26:34.604953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.422 [2024-07-15 22:26:34.605182] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.422 [2024-07-15 22:26:34.605192] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.422 [2024-07-15 22:26:34.605199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.422 [2024-07-15 22:26:34.608705] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.422 [2024-07-15 22:26:34.617804] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.422 [2024-07-15 22:26:34.618468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.422 [2024-07-15 22:26:34.618504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.422 [2024-07-15 22:26:34.618515] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.422 [2024-07-15 22:26:34.618751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.422 [2024-07-15 22:26:34.618971] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.422 [2024-07-15 22:26:34.618980] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.422 [2024-07-15 22:26:34.618988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.422 [2024-07-15 22:26:34.622583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.422 [2024-07-15 22:26:34.631697] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.422 [2024-07-15 22:26:34.632476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.422 [2024-07-15 22:26:34.632513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.422 [2024-07-15 22:26:34.632523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.422 [2024-07-15 22:26:34.632760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.422 [2024-07-15 22:26:34.632980] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.422 [2024-07-15 22:26:34.632988] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.422 [2024-07-15 22:26:34.632996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.422 [2024-07-15 22:26:34.636514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.422 [2024-07-15 22:26:34.640096] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:09.422 [2024-07-15 22:26:34.645621] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.422 [2024-07-15 22:26:34.646423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.422 [2024-07-15 22:26:34.646462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.422 [2024-07-15 22:26:34.646472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.423 [2024-07-15 22:26:34.646710] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.423 [2024-07-15 22:26:34.646931] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.423 [2024-07-15 22:26:34.646939] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.423 [2024-07-15 22:26:34.646947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.423 [2024-07-15 22:26:34.650463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.423 [2024-07-15 22:26:34.659562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.423 [2024-07-15 22:26:34.660103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.423 [2024-07-15 22:26:34.660132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.423 [2024-07-15 22:26:34.660143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.423 [2024-07-15 22:26:34.660363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.423 [2024-07-15 22:26:34.660579] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.423 [2024-07-15 22:26:34.660587] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.423 [2024-07-15 22:26:34.660595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.423 [2024-07-15 22:26:34.664101] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.423 [2024-07-15 22:26:34.673401] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.423 [2024-07-15 22:26:34.674063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.423 [2024-07-15 22:26:34.674080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.423 [2024-07-15 22:26:34.674093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.423 [2024-07-15 22:26:34.674316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.423 [2024-07-15 22:26:34.674533] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.423 [2024-07-15 22:26:34.674541] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.423 [2024-07-15 22:26:34.674547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.423 [2024-07-15 22:26:34.678046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.423 [2024-07-15 22:26:34.687347] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:09.423 [2024-07-15 22:26:34.688134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.423 [2024-07-15 22:26:34.688171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:09.423 [2024-07-15 22:26:34.688181] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:09.423 [2024-07-15 22:26:34.688419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:09.423 [2024-07-15 22:26:34.688639] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:09.423 [2024-07-15 22:26:34.688648] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:09.423 [2024-07-15 22:26:34.688655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:09.423 [2024-07-15 22:26:34.692165] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:09.423 [2024-07-15 22:26:34.694094] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:09.423 [2024-07-15 22:26:34.694118] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:09.423 [2024-07-15 22:26:34.694129] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:09.423 [2024-07-15 22:26:34.694134] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:09.423 [2024-07-15 22:26:34.694138] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:09.423 [2024-07-15 22:26:34.694172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:29:09.423 [2024-07-15 22:26:34.694347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:29:09.423 [2024-07-15 22:26:34.694348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:29:09.423 [2024-07-15 22:26:34.701269] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:09.423 [2024-07-15 22:26:34.702066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.423 [2024-07-15 22:26:34.702104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420
00:29:09.423 [2024-07-15 22:26:34.702115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set
00:29:09.423 [2024-07-15 22:26:34.702360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor
00:29:09.423 [2024-07-15 22:26:34.702582] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:09.423 [2024-07-15 22:26:34.702590] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:09.423 [2024-07-15 22:26:34.702598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:09.423 [2024-07-15 22:26:34.706105] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.423 [2024-07-15 22:26:34.715219] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.423 [2024-07-15 22:26:34.716002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.423 [2024-07-15 22:26:34.716040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.423 [2024-07-15 22:26:34.716051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.423 [2024-07-15 22:26:34.716296] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.423 [2024-07-15 22:26:34.716518] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.423 [2024-07-15 22:26:34.716527] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.423 [2024-07-15 22:26:34.716535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.423 [2024-07-15 22:26:34.720043] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.423 [2024-07-15 22:26:34.729149] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.423 [2024-07-15 22:26:34.729735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.423 [2024-07-15 22:26:34.729773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.423 [2024-07-15 22:26:34.729784] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.423 [2024-07-15 22:26:34.730025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.423 [2024-07-15 22:26:34.730254] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.423 [2024-07-15 22:26:34.730264] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.423 [2024-07-15 22:26:34.730272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.423 [2024-07-15 22:26:34.733781] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
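The app_setup_trace notices a little above give the recipe for capturing the new target's tracepoints (group mask 0xFFFF was enabled by the -e 0xFFFF argument). Both commands below come straight from those notices; only the copy destination is an arbitrary example path, and spdk_trace stands for the trace tool built alongside nvmf_tgt in this workspace:

  # Snapshot events from SPDK app instance 0 at runtime ...
  spdk_trace -s nvmf -i 0
  # ... or keep the shared-memory trace file for offline analysis.
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0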
00:29:09.423 [2024-07-15 22:26:34.743080] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.423 [2024-07-15 22:26:34.743697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.423 [2024-07-15 22:26:34.743734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.423 [2024-07-15 22:26:34.743747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.423 [2024-07-15 22:26:34.743985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.423 [2024-07-15 22:26:34.744225] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.423 [2024-07-15 22:26:34.744235] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.423 [2024-07-15 22:26:34.744243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.686 [2024-07-15 22:26:34.747751] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.686 [2024-07-15 22:26:34.756845] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.686 [2024-07-15 22:26:34.757520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-07-15 22:26:34.757557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.686 [2024-07-15 22:26:34.757573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.686 [2024-07-15 22:26:34.757810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.686 [2024-07-15 22:26:34.758030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.686 [2024-07-15 22:26:34.758039] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.686 [2024-07-15 22:26:34.758046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.686 [2024-07-15 22:26:34.761560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.686 [2024-07-15 22:26:34.770668] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.686 [2024-07-15 22:26:34.771422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-07-15 22:26:34.771459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.686 [2024-07-15 22:26:34.771469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.686 [2024-07-15 22:26:34.771706] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.686 [2024-07-15 22:26:34.771927] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.686 [2024-07-15 22:26:34.771936] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.686 [2024-07-15 22:26:34.771943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.686 [2024-07-15 22:26:34.775457] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.686 [2024-07-15 22:26:34.784555] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.686 [2024-07-15 22:26:34.785131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-07-15 22:26:34.785168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.686 [2024-07-15 22:26:34.785180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.686 [2024-07-15 22:26:34.785418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.686 [2024-07-15 22:26:34.785638] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.686 [2024-07-15 22:26:34.785647] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.686 [2024-07-15 22:26:34.785655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.686 [2024-07-15 22:26:34.789163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.686 [2024-07-15 22:26:34.798468] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.686 [2024-07-15 22:26:34.799215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-07-15 22:26:34.799252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.686 [2024-07-15 22:26:34.799264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.686 [2024-07-15 22:26:34.799504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.686 [2024-07-15 22:26:34.799730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.686 [2024-07-15 22:26:34.799743] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.686 [2024-07-15 22:26:34.799751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.686 [2024-07-15 22:26:34.803268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.686 [2024-07-15 22:26:34.812364] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.686 [2024-07-15 22:26:34.813176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-07-15 22:26:34.813213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.686 [2024-07-15 22:26:34.813225] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.686 [2024-07-15 22:26:34.813464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.686 [2024-07-15 22:26:34.813684] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.686 [2024-07-15 22:26:34.813693] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.686 [2024-07-15 22:26:34.813700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.686 [2024-07-15 22:26:34.817217] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.686 [2024-07-15 22:26:34.826111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.686 [2024-07-15 22:26:34.826858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-07-15 22:26:34.826895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.686 [2024-07-15 22:26:34.826907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.686 [2024-07-15 22:26:34.827155] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.686 [2024-07-15 22:26:34.827376] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.686 [2024-07-15 22:26:34.827384] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.686 [2024-07-15 22:26:34.827392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.686 [2024-07-15 22:26:34.830898] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.686 [2024-07-15 22:26:34.839995] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.686 [2024-07-15 22:26:34.840654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-07-15 22:26:34.840673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.686 [2024-07-15 22:26:34.840680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.686 [2024-07-15 22:26:34.840897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.686 [2024-07-15 22:26:34.841113] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.686 [2024-07-15 22:26:34.841121] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.686 [2024-07-15 22:26:34.841134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.686 [2024-07-15 22:26:34.844644] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.686 [2024-07-15 22:26:34.853940] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.686 [2024-07-15 22:26:34.854466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-07-15 22:26:34.854502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.686 [2024-07-15 22:26:34.854513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.686 [2024-07-15 22:26:34.854750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.686 [2024-07-15 22:26:34.854970] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.686 [2024-07-15 22:26:34.854979] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.686 [2024-07-15 22:26:34.854986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.686 [2024-07-15 22:26:34.858500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.686 [2024-07-15 22:26:34.867799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.686 [2024-07-15 22:26:34.868551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-07-15 22:26:34.868589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.686 [2024-07-15 22:26:34.868600] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.686 [2024-07-15 22:26:34.868837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.686 [2024-07-15 22:26:34.869057] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.686 [2024-07-15 22:26:34.869066] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.686 [2024-07-15 22:26:34.869074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.686 [2024-07-15 22:26:34.872590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.686 [2024-07-15 22:26:34.881688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.686 [2024-07-15 22:26:34.882434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-07-15 22:26:34.882471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.686 [2024-07-15 22:26:34.882482] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.686 [2024-07-15 22:26:34.882718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.686 [2024-07-15 22:26:34.882938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.686 [2024-07-15 22:26:34.882947] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.686 [2024-07-15 22:26:34.882954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.686 [2024-07-15 22:26:34.886471] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.686 [2024-07-15 22:26:34.895570] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.686 [2024-07-15 22:26:34.896411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-07-15 22:26:34.896448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.686 [2024-07-15 22:26:34.896459] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.686 [2024-07-15 22:26:34.896700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.686 [2024-07-15 22:26:34.896920] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.687 [2024-07-15 22:26:34.896929] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.687 [2024-07-15 22:26:34.896936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.687 [2024-07-15 22:26:34.900450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.687 [2024-07-15 22:26:34.909344] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.687 [2024-07-15 22:26:34.910136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-07-15 22:26:34.910172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.687 [2024-07-15 22:26:34.910184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.687 [2024-07-15 22:26:34.910422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.687 [2024-07-15 22:26:34.910642] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.687 [2024-07-15 22:26:34.910650] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.687 [2024-07-15 22:26:34.910658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.687 [2024-07-15 22:26:34.914169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.687 [2024-07-15 22:26:34.923264] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.687 [2024-07-15 22:26:34.923987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-07-15 22:26:34.924024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.687 [2024-07-15 22:26:34.924034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.687 [2024-07-15 22:26:34.924278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.687 [2024-07-15 22:26:34.924500] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.687 [2024-07-15 22:26:34.924509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.687 [2024-07-15 22:26:34.924517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.687 [2024-07-15 22:26:34.928025] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.687 [2024-07-15 22:26:34.937120] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.687 [2024-07-15 22:26:34.937909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-07-15 22:26:34.937946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.687 [2024-07-15 22:26:34.937957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.687 [2024-07-15 22:26:34.938200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.687 [2024-07-15 22:26:34.938421] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.687 [2024-07-15 22:26:34.938430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.687 [2024-07-15 22:26:34.938441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.687 [2024-07-15 22:26:34.941950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.687 [2024-07-15 22:26:34.951060] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.687 [2024-07-15 22:26:34.951824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-07-15 22:26:34.951861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.687 [2024-07-15 22:26:34.951871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.687 [2024-07-15 22:26:34.952108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.687 [2024-07-15 22:26:34.952337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.687 [2024-07-15 22:26:34.952346] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.687 [2024-07-15 22:26:34.952353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.687 [2024-07-15 22:26:34.955859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.687 [2024-07-15 22:26:34.964967] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.687 [2024-07-15 22:26:34.965597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-07-15 22:26:34.965634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.687 [2024-07-15 22:26:34.965645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.687 [2024-07-15 22:26:34.965882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.687 [2024-07-15 22:26:34.966102] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.687 [2024-07-15 22:26:34.966111] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.687 [2024-07-15 22:26:34.966118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.687 [2024-07-15 22:26:34.969634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.687 [2024-07-15 22:26:34.978730] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.687 [2024-07-15 22:26:34.979503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-07-15 22:26:34.979540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.687 [2024-07-15 22:26:34.979551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.687 [2024-07-15 22:26:34.979787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.687 [2024-07-15 22:26:34.980007] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.687 [2024-07-15 22:26:34.980016] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.687 [2024-07-15 22:26:34.980024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.687 [2024-07-15 22:26:34.983539] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.687 [2024-07-15 22:26:34.992636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.687 [2024-07-15 22:26:34.993416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-07-15 22:26:34.993457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.687 [2024-07-15 22:26:34.993468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.687 [2024-07-15 22:26:34.993704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.687 [2024-07-15 22:26:34.993924] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.687 [2024-07-15 22:26:34.993933] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.687 [2024-07-15 22:26:34.993940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.687 [2024-07-15 22:26:34.997453] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.687 [2024-07-15 22:26:35.006554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.687 [2024-07-15 22:26:35.007139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-07-15 22:26:35.007174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.687 [2024-07-15 22:26:35.007186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.687 [2024-07-15 22:26:35.007425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.687 [2024-07-15 22:26:35.007644] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.687 [2024-07-15 22:26:35.007652] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.687 [2024-07-15 22:26:35.007660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.950 [2024-07-15 22:26:35.011171] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.950 [2024-07-15 22:26:35.020472] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.950 [2024-07-15 22:26:35.021359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.950 [2024-07-15 22:26:35.021397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.950 [2024-07-15 22:26:35.021408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.950 [2024-07-15 22:26:35.021644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.950 [2024-07-15 22:26:35.021864] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.950 [2024-07-15 22:26:35.021873] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.950 [2024-07-15 22:26:35.021880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.950 [2024-07-15 22:26:35.025616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.950 [2024-07-15 22:26:35.034310] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.950 [2024-07-15 22:26:35.035105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.950 [2024-07-15 22:26:35.035149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.950 [2024-07-15 22:26:35.035160] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.950 [2024-07-15 22:26:35.035396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.950 [2024-07-15 22:26:35.035621] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.950 [2024-07-15 22:26:35.035630] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.950 [2024-07-15 22:26:35.035637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.950 [2024-07-15 22:26:35.039150] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.950 [2024-07-15 22:26:35.048254] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.950 [2024-07-15 22:26:35.049038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.950 [2024-07-15 22:26:35.049075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.950 [2024-07-15 22:26:35.049087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.950 [2024-07-15 22:26:35.049332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.950 [2024-07-15 22:26:35.049554] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.950 [2024-07-15 22:26:35.049569] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.950 [2024-07-15 22:26:35.049576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.950 [2024-07-15 22:26:35.053085] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.950 [2024-07-15 22:26:35.062182] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.950 [2024-07-15 22:26:35.062817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.951 [2024-07-15 22:26:35.062835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.951 [2024-07-15 22:26:35.062843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.951 [2024-07-15 22:26:35.063060] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.951 [2024-07-15 22:26:35.063283] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.951 [2024-07-15 22:26:35.063292] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.951 [2024-07-15 22:26:35.063298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.951 [2024-07-15 22:26:35.066798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.951 [2024-07-15 22:26:35.076091] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.951 [2024-07-15 22:26:35.076810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.951 [2024-07-15 22:26:35.076847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.951 [2024-07-15 22:26:35.076859] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.951 [2024-07-15 22:26:35.077096] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.951 [2024-07-15 22:26:35.077324] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.951 [2024-07-15 22:26:35.077333] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.951 [2024-07-15 22:26:35.077341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.951 [2024-07-15 22:26:35.080845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.951 [2024-07-15 22:26:35.089943] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.951 [2024-07-15 22:26:35.090646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.951 [2024-07-15 22:26:35.090664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.951 [2024-07-15 22:26:35.090672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.951 [2024-07-15 22:26:35.090890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.951 [2024-07-15 22:26:35.091106] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.951 [2024-07-15 22:26:35.091114] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.951 [2024-07-15 22:26:35.091121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.951 [2024-07-15 22:26:35.094630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.951 [2024-07-15 22:26:35.103716] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.951 [2024-07-15 22:26:35.104441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.951 [2024-07-15 22:26:35.104478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.951 [2024-07-15 22:26:35.104490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.951 [2024-07-15 22:26:35.104730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.951 [2024-07-15 22:26:35.104949] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.951 [2024-07-15 22:26:35.104958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.951 [2024-07-15 22:26:35.104966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.951 [2024-07-15 22:26:35.108480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.951 [2024-07-15 22:26:35.117574] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.951 [2024-07-15 22:26:35.118382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.951 [2024-07-15 22:26:35.118419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.951 [2024-07-15 22:26:35.118430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.951 [2024-07-15 22:26:35.118666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.951 [2024-07-15 22:26:35.118887] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.951 [2024-07-15 22:26:35.118895] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.951 [2024-07-15 22:26:35.118902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.951 [2024-07-15 22:26:35.122413] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.951 [2024-07-15 22:26:35.131504] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.951 [2024-07-15 22:26:35.132320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.951 [2024-07-15 22:26:35.132357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.951 [2024-07-15 22:26:35.132373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.951 [2024-07-15 22:26:35.132613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.951 [2024-07-15 22:26:35.132834] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.951 [2024-07-15 22:26:35.132842] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.951 [2024-07-15 22:26:35.132849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.951 [2024-07-15 22:26:35.136366] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.951 [2024-07-15 22:26:35.145263] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.951 [2024-07-15 22:26:35.145990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.951 [2024-07-15 22:26:35.146027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.951 [2024-07-15 22:26:35.146038] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.951 [2024-07-15 22:26:35.146281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.951 [2024-07-15 22:26:35.146503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.951 [2024-07-15 22:26:35.146511] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.951 [2024-07-15 22:26:35.146519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.951 [2024-07-15 22:26:35.150023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.951 [2024-07-15 22:26:35.159120] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.951 [2024-07-15 22:26:35.159890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.951 [2024-07-15 22:26:35.159927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.951 [2024-07-15 22:26:35.159937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.951 [2024-07-15 22:26:35.160180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.951 [2024-07-15 22:26:35.160401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.951 [2024-07-15 22:26:35.160410] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.951 [2024-07-15 22:26:35.160417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.951 [2024-07-15 22:26:35.163930] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.951 [2024-07-15 22:26:35.173029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.951 [2024-07-15 22:26:35.173768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.951 [2024-07-15 22:26:35.173805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.951 [2024-07-15 22:26:35.173816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.951 [2024-07-15 22:26:35.174053] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.951 [2024-07-15 22:26:35.174281] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.951 [2024-07-15 22:26:35.174295] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.951 [2024-07-15 22:26:35.174303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.951 [2024-07-15 22:26:35.177809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.951 [2024-07-15 22:26:35.186901] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.951 [2024-07-15 22:26:35.187604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.951 [2024-07-15 22:26:35.187641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.951 [2024-07-15 22:26:35.187651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.951 [2024-07-15 22:26:35.187887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.951 [2024-07-15 22:26:35.188108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.951 [2024-07-15 22:26:35.188116] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.951 [2024-07-15 22:26:35.188131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.951 [2024-07-15 22:26:35.191637] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.951 [2024-07-15 22:26:35.200732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.951 [2024-07-15 22:26:35.201514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.951 [2024-07-15 22:26:35.201551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.951 [2024-07-15 22:26:35.201562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.951 [2024-07-15 22:26:35.201798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.951 [2024-07-15 22:26:35.202019] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.951 [2024-07-15 22:26:35.202027] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.951 [2024-07-15 22:26:35.202034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.952 [2024-07-15 22:26:35.205547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.952 [2024-07-15 22:26:35.214642] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.952 [2024-07-15 22:26:35.215426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.952 [2024-07-15 22:26:35.215463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.952 [2024-07-15 22:26:35.215474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.952 [2024-07-15 22:26:35.215711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.952 [2024-07-15 22:26:35.215931] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.952 [2024-07-15 22:26:35.215939] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.952 [2024-07-15 22:26:35.215947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.952 [2024-07-15 22:26:35.219462] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.952 [2024-07-15 22:26:35.228556] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.952 [2024-07-15 22:26:35.229344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.952 [2024-07-15 22:26:35.229380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.952 [2024-07-15 22:26:35.229392] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.952 [2024-07-15 22:26:35.229628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.952 [2024-07-15 22:26:35.229848] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.952 [2024-07-15 22:26:35.229857] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.952 [2024-07-15 22:26:35.229864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.952 [2024-07-15 22:26:35.233379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.952 [2024-07-15 22:26:35.242474] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.952 [2024-07-15 22:26:35.243241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.952 [2024-07-15 22:26:35.243278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.952 [2024-07-15 22:26:35.243290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.952 [2024-07-15 22:26:35.243530] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.952 [2024-07-15 22:26:35.243751] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.952 [2024-07-15 22:26:35.243759] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.952 [2024-07-15 22:26:35.243766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.952 [2024-07-15 22:26:35.247288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.952 [2024-07-15 22:26:35.256381] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.952 [2024-07-15 22:26:35.256945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.952 [2024-07-15 22:26:35.256962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.952 [2024-07-15 22:26:35.256970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.952 [2024-07-15 22:26:35.257192] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.952 [2024-07-15 22:26:35.257409] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.952 [2024-07-15 22:26:35.257417] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.952 [2024-07-15 22:26:35.257424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.952 [2024-07-15 22:26:35.260924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.952 [2024-07-15 22:26:35.270222] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.952 [2024-07-15 22:26:35.270959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.952 [2024-07-15 22:26:35.270995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:09.952 [2024-07-15 22:26:35.271007] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:09.952 [2024-07-15 22:26:35.271261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:09.952 [2024-07-15 22:26:35.271483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.952 [2024-07-15 22:26:35.271492] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.952 [2024-07-15 22:26:35.271499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.215 [2024-07-15 22:26:35.275004] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.215 [2024-07-15 22:26:35.284094] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.215 [2024-07-15 22:26:35.284546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.215 [2024-07-15 22:26:35.284583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:10.215 [2024-07-15 22:26:35.284596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:10.215 [2024-07-15 22:26:35.284836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:10.215 [2024-07-15 22:26:35.285056] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.215 [2024-07-15 22:26:35.285065] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.215 [2024-07-15 22:26:35.285072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.215 [2024-07-15 22:26:35.288585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.215 [2024-07-15 22:26:35.297886] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.215 [2024-07-15 22:26:35.298526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.215 [2024-07-15 22:26:35.298544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:10.215 [2024-07-15 22:26:35.298552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:10.215 [2024-07-15 22:26:35.298769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:10.215 [2024-07-15 22:26:35.298985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.215 [2024-07-15 22:26:35.298993] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.215 [2024-07-15 22:26:35.298999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.215 [2024-07-15 22:26:35.302504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.215 [2024-07-15 22:26:35.311798] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.215 [2024-07-15 22:26:35.312401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.215 [2024-07-15 22:26:35.312438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:10.215 [2024-07-15 22:26:35.312449] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:10.215 [2024-07-15 22:26:35.312686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:10.215 [2024-07-15 22:26:35.312905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.215 [2024-07-15 22:26:35.312914] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.215 [2024-07-15 22:26:35.312925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.215 [2024-07-15 22:26:35.316437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.215 [2024-07-15 22:26:35.325737] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.215 [2024-07-15 22:26:35.326526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.215 [2024-07-15 22:26:35.326563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:10.215 [2024-07-15 22:26:35.326573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:10.215 [2024-07-15 22:26:35.326810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:10.215 [2024-07-15 22:26:35.327030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.215 [2024-07-15 22:26:35.327038] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.215 [2024-07-15 22:26:35.327045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.215 [2024-07-15 22:26:35.330558] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.215 22:26:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:10.215 22:26:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:29:10.215 22:26:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:10.215 22:26:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:10.215 22:26:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:10.215 [2024-07-15 22:26:35.339655] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.215 [2024-07-15 22:26:35.340087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.215 [2024-07-15 22:26:35.340105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:10.215 [2024-07-15 22:26:35.340113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:10.215 [2024-07-15 22:26:35.340337] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:10.215 [2024-07-15 22:26:35.340554] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.215 [2024-07-15 22:26:35.340563] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.215 [2024-07-15 22:26:35.340569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.215 [2024-07-15 22:26:35.344071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.215 [2024-07-15 22:26:35.353589] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.215 [2024-07-15 22:26:35.354437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.215 [2024-07-15 22:26:35.354474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:10.215 [2024-07-15 22:26:35.354484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:10.215 [2024-07-15 22:26:35.354721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:10.215 [2024-07-15 22:26:35.354941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.215 [2024-07-15 22:26:35.354950] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.215 [2024-07-15 22:26:35.354958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.215 [2024-07-15 22:26:35.358480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.215 [2024-07-15 22:26:35.367383] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.215 [2024-07-15 22:26:35.368132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.215 [2024-07-15 22:26:35.368169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:10.215 [2024-07-15 22:26:35.368181] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:10.215 [2024-07-15 22:26:35.368419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:10.215 [2024-07-15 22:26:35.368639] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.215 [2024-07-15 22:26:35.368648] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.215 [2024-07-15 22:26:35.368655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.215 [2024-07-15 22:26:35.372164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.215 22:26:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:10.215 22:26:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:10.215 22:26:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.215 22:26:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:10.215 [2024-07-15 22:26:35.380514] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:10.215 [2024-07-15 22:26:35.381259] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.215 [2024-07-15 22:26:35.382013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.215 [2024-07-15 22:26:35.382050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:10.215 [2024-07-15 22:26:35.382060] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:10.215 [2024-07-15 22:26:35.382305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:10.215 [2024-07-15 22:26:35.382526] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.215 [2024-07-15 22:26:35.382534] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.215 [2024-07-15 22:26:35.382542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.215 22:26:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.215 22:26:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:10.215 [2024-07-15 22:26:35.386046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.215 22:26:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.215 22:26:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:10.215 [2024-07-15 22:26:35.395140] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.215 [2024-07-15 22:26:35.395926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.215 [2024-07-15 22:26:35.395963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:10.215 [2024-07-15 22:26:35.395973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:10.215 [2024-07-15 22:26:35.396217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:10.215 [2024-07-15 22:26:35.396444] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.215 [2024-07-15 22:26:35.396452] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.215 [2024-07-15 22:26:35.396460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:10.215 [2024-07-15 22:26:35.399964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.216 [2024-07-15 22:26:35.409062] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.216 [2024-07-15 22:26:35.409854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.216 [2024-07-15 22:26:35.409891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:10.216 [2024-07-15 22:26:35.409901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:10.216 [2024-07-15 22:26:35.410147] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:10.216 [2024-07-15 22:26:35.410368] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.216 [2024-07-15 22:26:35.410376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.216 [2024-07-15 22:26:35.410384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.216 [2024-07-15 22:26:35.413891] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.216 Malloc0 00:29:10.216 22:26:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.216 22:26:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:10.216 22:26:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.216 22:26:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:10.216 [2024-07-15 22:26:35.422981] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.216 [2024-07-15 22:26:35.423725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.216 [2024-07-15 22:26:35.423762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:10.216 [2024-07-15 22:26:35.423773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:10.216 [2024-07-15 22:26:35.424010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:10.216 [2024-07-15 22:26:35.424237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.216 [2024-07-15 22:26:35.424246] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.216 [2024-07-15 22:26:35.424254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:10.216 22:26:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.216 22:26:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:10.216 22:26:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.216 [2024-07-15 22:26:35.427763] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.216 22:26:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:10.216 [2024-07-15 22:26:35.436854] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.216 [2024-07-15 22:26:35.437599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.216 [2024-07-15 22:26:35.437636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc253b0 with addr=10.0.0.2, port=4420 00:29:10.216 [2024-07-15 22:26:35.437651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc253b0 is same with the state(5) to be set 00:29:10.216 [2024-07-15 22:26:35.437888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc253b0 (9): Bad file descriptor 00:29:10.216 [2024-07-15 22:26:35.438108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.216 [2024-07-15 22:26:35.438116] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.216 [2024-07-15 22:26:35.438134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.216 22:26:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.216 22:26:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:10.216 22:26:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.216 22:26:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:10.216 [2024-07-15 22:26:35.441641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.216 [2024-07-15 22:26:35.446089] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:10.216 [2024-07-15 22:26:35.450741] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.216 22:26:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.216 22:26:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2955050 00:29:10.216 [2024-07-15 22:26:35.482064] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
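The interleaved rpc_cmd calls above (host/bdevperf.sh lines 17-21 in the trace) are what finally let the reset succeed: they create the TCP transport, a 64 MB / 512-byte-block Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 with Malloc0 as its namespace, and a listener on 10.0.0.2:4420, after which _bdev_nvme_reset_ctrlr_complete reports success. rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py; run standalone against a target on the default RPC socket, the same sequence would look roughly like:

  # transport options exactly as passed in the trace
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # back the subsystem with a 64 MB malloc bdev using 512-byte blocks
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # expose it on the address/port the host side keeps reconnecting to
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420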
00:29:20.226 00:29:20.226 Latency(us) 00:29:20.226 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:20.226 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:20.226 Verification LBA range: start 0x0 length 0x4000 00:29:20.226 Nvme1n1 : 15.01 8791.00 34.34 9752.41 0.00 6877.47 1071.79 15073.28 00:29:20.226 =================================================================================================================== 00:29:20.226 Total : 8791.00 34.34 9752.41 0.00 6877.47 1071.79 15073.28 00:29:20.226 22:26:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:29:20.226 22:26:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:20.226 22:26:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.226 22:26:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:20.226 22:26:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.226 22:26:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:20.226 22:26:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:20.226 22:26:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:20.226 22:26:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:29:20.226 22:26:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:20.226 22:26:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:29:20.226 22:26:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:20.226 22:26:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:20.226 rmmod nvme_tcp 00:29:20.226 rmmod nvme_fabrics 00:29:20.226 rmmod nvme_keyring 00:29:20.226 22:26:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:20.226 22:26:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:29:20.226 22:26:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:29:20.226 22:26:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2956131 ']' 00:29:20.226 22:26:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2956131 00:29:20.226 22:26:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 2956131 ']' 00:29:20.226 22:26:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 2956131 00:29:20.226 22:26:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:29:20.226 22:26:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:20.226 22:26:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2956131 00:29:20.226 22:26:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:20.226 22:26:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:20.226 22:26:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2956131' 00:29:20.226 killing process with pid 2956131 00:29:20.226 22:26:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 2956131 00:29:20.226 22:26:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 2956131 00:29:20.226 22:26:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:20.226 22:26:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
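The teardown that follows the latency summary is the standard nvmftestfini path: drop the subsystem over RPC, unload the host-side NVMe/TCP modules, then kill the nvmf target app by pid. Condensed into a sketch (pid 2956131 and the subsystem name are the values from this run; the kill -0 probe mirrors what the killprocess helper does):

  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp       # also unloads nvme_fabrics / nvme_keyring, as the rmmod lines show
  modprobe -v -r nvme-fabrics
  kill -0 2956131 && kill 2956131   # verify the target pid is still alive, then terminate it
  wait 2956131                      # reap the killed target process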
00:29:20.226 22:26:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:20.226 22:26:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:20.226 22:26:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:20.226 22:26:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:20.226 22:26:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:20.226 22:26:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:21.169 22:26:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:21.169 00:29:21.169 real 0m27.780s 00:29:21.169 user 1m2.836s 00:29:21.169 sys 0m7.252s 00:29:21.169 22:26:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:21.169 22:26:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:21.169 ************************************ 00:29:21.169 END TEST nvmf_bdevperf 00:29:21.169 ************************************ 00:29:21.169 22:26:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:21.169 22:26:46 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:21.169 22:26:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:21.169 22:26:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:21.169 22:26:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:21.169 ************************************ 00:29:21.169 START TEST nvmf_target_disconnect 00:29:21.169 ************************************ 00:29:21.169 22:26:46 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:21.431 * Looking for test storage... 
00:29:21.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:21.431 22:26:46 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:21.431 22:26:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:29:21.431 22:26:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:21.431 22:26:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:21.431 22:26:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:21.431 22:26:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:21.431 22:26:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:21.431 22:26:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:21.431 22:26:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:21.431 22:26:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:21.431 22:26:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:21.431 22:26:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:21.431 22:26:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:21.431 22:26:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:21.431 22:26:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:21.431 22:26:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:21.431 22:26:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:21.431 22:26:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:21.431 22:26:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:21.431 22:26:46 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:21.431 22:26:46 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:21.431 22:26:46 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:21.431 22:26:46 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.431 22:26:46 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.431 22:26:46 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.431 22:26:46 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:21.431 22:26:46 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.431 22:26:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:29:21.431 22:26:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:21.432 22:26:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:21.432 22:26:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:21.432 22:26:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:21.432 22:26:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:21.432 22:26:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:21.432 22:26:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:21.432 22:26:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:21.432 22:26:46 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:21.432 22:26:46 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:21.432 22:26:46 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:21.432 22:26:46 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:21.432 22:26:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:21.432 22:26:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:21.432 22:26:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:29:21.432 22:26:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:21.432 22:26:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:21.432 22:26:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:21.432 22:26:46 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:21.432 22:26:46 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:21.432 22:26:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:21.432 22:26:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:21.432 22:26:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:29:21.432 22:26:46 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:29.579 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:29.579 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:29.579 22:26:53 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:29.579 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:29.579 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:29.579 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:29.580 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:29.580 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.510 ms 00:29:29.580 00:29:29.580 --- 10.0.0.2 ping statistics --- 00:29:29.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:29.580 rtt min/avg/max/mdev = 0.510/0.510/0.510/0.000 ms 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:29.580 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:29.580 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.406 ms 00:29:29.580 00:29:29.580 --- 10.0.0.1 ping statistics --- 00:29:29.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:29.580 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:29.580 ************************************ 00:29:29.580 START TEST nvmf_target_disconnect_tc1 00:29:29.580 ************************************ 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:29:29.580 
22:26:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:29.580 EAL: No free 2048 kB hugepages reported on node 1 00:29:29.580 [2024-07-15 22:26:53.845209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.580 [2024-07-15 22:26:53.845286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf9ee20 with addr=10.0.0.2, port=4420 00:29:29.580 [2024-07-15 22:26:53.845323] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:29.580 [2024-07-15 22:26:53.845340] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:29.580 [2024-07-15 22:26:53.845348] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:29:29.580 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:29.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:29.580 Initializing NVMe Controllers 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:29.580 00:29:29.580 real 0m0.111s 00:29:29.580 user 0m0.041s 00:29:29.580 sys 0m0.069s 
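That is the expected outcome for tc1: the reconnect example is pointed at 10.0.0.2:4420 while no target is listening, spdk_nvme_probe() fails with the same ECONNREFUSED, and the NOT wrapper converts the non-zero exit status (es=1 above) into a pass. Reduced to its shape, the check is essentially (binary path and arguments as used in this run):

  if /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect \
       -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
       -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'; then
    echo "reconnect unexpectedly succeeded with no target listening" >&2
    exit 1
  fi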
00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:29.580 ************************************ 00:29:29.580 END TEST nvmf_target_disconnect_tc1 00:29:29.580 ************************************ 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:29.580 ************************************ 00:29:29.580 START TEST nvmf_target_disconnect_tc2 00:29:29.580 ************************************ 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2962113 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2962113 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2962113 ']' 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:29.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
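For tc2, disconnect_init brings up a real target first: nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace with core mask 0xF0, and waitforlisten blocks until the app's RPC socket appears before any rpc_cmd is sent. A minimal sketch of that start-and-wait pattern (the polling loop is an illustration, not the actual waitforlisten implementation):

  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!
  # block until the target exposes its RPC socket (default /var/tmp/spdk.sock)
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done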
00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:29.580 22:26:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:29.580 [2024-07-15 22:26:53.987720] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:29:29.580 [2024-07-15 22:26:53.987781] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:29.580 EAL: No free 2048 kB hugepages reported on node 1 00:29:29.580 [2024-07-15 22:26:54.077332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:29.580 [2024-07-15 22:26:54.172374] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:29.580 [2024-07-15 22:26:54.172429] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:29.580 [2024-07-15 22:26:54.172441] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:29.580 [2024-07-15 22:26:54.172448] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:29.580 [2024-07-15 22:26:54.172453] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:29.580 [2024-07-15 22:26:54.172625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:29:29.580 [2024-07-15 22:26:54.172762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:29:29.580 [2024-07-15 22:26:54.172897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:29:29.580 [2024-07-15 22:26:54.172898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:29:29.580 22:26:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:29.581 22:26:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:29:29.581 22:26:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:29.581 22:26:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:29.581 22:26:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:29.581 22:26:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:29.581 22:26:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:29.581 22:26:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.581 22:26:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:29.581 Malloc0 00:29:29.581 22:26:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:29:29.581 22:26:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:29.581 22:26:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.581 22:26:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:29.581 [2024-07-15 22:26:54.847997] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:29.581 22:26:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.581 22:26:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:29.581 22:26:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.581 22:26:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:29.581 22:26:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.581 22:26:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:29.581 22:26:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.581 22:26:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:29.581 22:26:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.581 22:26:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:29.581 22:26:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.581 22:26:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:29.581 [2024-07-15 22:26:54.888345] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:29.581 22:26:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.581 22:26:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:29.581 22:26:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.581 22:26:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:29.843 22:26:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.843 22:26:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2962456 00:29:29.843 22:26:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:29.843 22:26:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:29.843 EAL: No free 2048 kB hugepages reported on node 1 00:29:31.767 22:26:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2962113 00:29:31.767 22:26:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:31.767 Read completed with error (sct=0, sc=8) 00:29:31.767 starting I/O failed 00:29:31.767 Read completed with error (sct=0, sc=8) 00:29:31.767 starting I/O failed 00:29:31.767 Read completed with error (sct=0, sc=8) 00:29:31.767 starting I/O failed 00:29:31.767 Read completed with error (sct=0, sc=8) 00:29:31.767 starting I/O failed 00:29:31.767 Read completed with error (sct=0, sc=8) 00:29:31.767 starting I/O failed 00:29:31.767 Read completed with error (sct=0, sc=8) 00:29:31.767 starting I/O failed 00:29:31.767 Read completed with error (sct=0, sc=8) 00:29:31.767 starting I/O failed 00:29:31.767 Read completed with error (sct=0, sc=8) 00:29:31.767 starting I/O failed 00:29:31.767 Read completed with error (sct=0, sc=8) 00:29:31.767 starting I/O failed 00:29:31.767 Read completed with error (sct=0, sc=8) 00:29:31.767 starting I/O failed 00:29:31.767 Write completed with error (sct=0, sc=8) 00:29:31.767 starting I/O failed 00:29:31.767 Read completed with error (sct=0, sc=8) 00:29:31.767 starting I/O failed 00:29:31.767 Read completed with error (sct=0, sc=8) 00:29:31.767 starting I/O failed 00:29:31.767 Read completed with error (sct=0, sc=8) 00:29:31.767 starting I/O failed 00:29:31.767 Read completed with error (sct=0, sc=8) 00:29:31.767 starting I/O failed 00:29:31.767 Read completed with error (sct=0, sc=8) 00:29:31.767 starting I/O failed 00:29:31.767 Write completed with error (sct=0, sc=8) 00:29:31.767 starting I/O failed 00:29:31.767 Read completed with error (sct=0, sc=8) 00:29:31.767 starting I/O failed 00:29:31.767 Write completed with error (sct=0, sc=8) 00:29:31.767 starting I/O failed 00:29:31.767 Read completed with error (sct=0, sc=8) 00:29:31.767 starting I/O failed 00:29:31.767 Read completed with error (sct=0, sc=8) 00:29:31.767 starting I/O failed 00:29:31.767 Read completed with error (sct=0, sc=8) 00:29:31.767 starting I/O failed 00:29:31.767 Read completed with error (sct=0, sc=8) 00:29:31.767 starting I/O failed 00:29:31.767 Read completed with error (sct=0, sc=8) 00:29:31.767 starting I/O failed 00:29:31.767 Read completed with error (sct=0, sc=8) 00:29:31.767 starting I/O failed 00:29:31.767 Write completed with error (sct=0, sc=8) 00:29:31.767 starting I/O failed 00:29:31.767 Read completed with error (sct=0, sc=8) 00:29:31.767 starting I/O failed 00:29:31.767 Write completed with error (sct=0, sc=8) 00:29:31.767 starting I/O failed 00:29:31.767 Write completed with error (sct=0, sc=8) 00:29:31.767 starting I/O failed 00:29:31.767 Read completed with error (sct=0, sc=8) 00:29:31.767 starting I/O failed 00:29:31.767 Read completed with error (sct=0, sc=8) 00:29:31.767 starting I/O failed 00:29:31.767 Write completed with error (sct=0, sc=8) 00:29:31.767 starting I/O failed 00:29:31.767 [2024-07-15 22:26:56.920489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.767 [2024-07-15 22:26:56.920857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.767 
[2024-07-15 22:26:56.920873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.767 qpair failed and we were unable to recover it. 00:29:31.767 [2024-07-15 22:26:56.921398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.767 [2024-07-15 22:26:56.921426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.767 qpair failed and we were unable to recover it. 00:29:31.767 [2024-07-15 22:26:56.921886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.767 [2024-07-15 22:26:56.921895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.767 qpair failed and we were unable to recover it. 00:29:31.767 [2024-07-15 22:26:56.922356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.767 [2024-07-15 22:26:56.922383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.767 qpair failed and we were unable to recover it. 00:29:31.767 [2024-07-15 22:26:56.922682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.767 [2024-07-15 22:26:56.922690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.767 qpair failed and we were unable to recover it. 00:29:31.767 [2024-07-15 22:26:56.923099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.767 [2024-07-15 22:26:56.923107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.767 qpair failed and we were unable to recover it. 00:29:31.767 [2024-07-15 22:26:56.923362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.767 [2024-07-15 22:26:56.923390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.767 qpair failed and we were unable to recover it. 00:29:31.767 [2024-07-15 22:26:56.923728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.767 [2024-07-15 22:26:56.923737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.767 qpair failed and we were unable to recover it. 00:29:31.767 [2024-07-15 22:26:56.924342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.767 [2024-07-15 22:26:56.924370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.767 qpair failed and we were unable to recover it. 00:29:31.767 [2024-07-15 22:26:56.924805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.767 [2024-07-15 22:26:56.924814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.767 qpair failed and we were unable to recover it. 
00:29:31.767 [2024-07-15 22:26:56.925334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.767 [2024-07-15 22:26:56.925362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:31.767 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix.c:1038:posix_sock_create connect() failed with errno = 111, then nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats continuously for further connection attempts between 22:26:56.925 and 22:26:57.011 ...]
00:29:31.773 [2024-07-15 22:26:57.011723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.773 [2024-07-15 22:26:57.011732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:31.773 qpair failed and we were unable to recover it.
00:29:31.773 [2024-07-15 22:26:57.012074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.773 [2024-07-15 22:26:57.012081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.773 qpair failed and we were unable to recover it. 00:29:31.773 [2024-07-15 22:26:57.012380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.773 [2024-07-15 22:26:57.012388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.773 qpair failed and we were unable to recover it. 00:29:31.773 [2024-07-15 22:26:57.012796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.773 [2024-07-15 22:26:57.012802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.773 qpair failed and we were unable to recover it. 00:29:31.773 [2024-07-15 22:26:57.013282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.773 [2024-07-15 22:26:57.013289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.773 qpair failed and we were unable to recover it. 00:29:31.773 [2024-07-15 22:26:57.013687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.773 [2024-07-15 22:26:57.013693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.773 qpair failed and we were unable to recover it. 00:29:31.773 [2024-07-15 22:26:57.014087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.773 [2024-07-15 22:26:57.014093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.773 qpair failed and we were unable to recover it. 00:29:31.773 [2024-07-15 22:26:57.014502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.773 [2024-07-15 22:26:57.014508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.773 qpair failed and we were unable to recover it. 00:29:31.773 [2024-07-15 22:26:57.014795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.773 [2024-07-15 22:26:57.014802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.773 qpair failed and we were unable to recover it. 00:29:31.773 [2024-07-15 22:26:57.015185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.773 [2024-07-15 22:26:57.015192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.773 qpair failed and we were unable to recover it. 00:29:31.773 [2024-07-15 22:26:57.015554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.773 [2024-07-15 22:26:57.015561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.773 qpair failed and we were unable to recover it. 
00:29:31.773 [2024-07-15 22:26:57.015951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.773 [2024-07-15 22:26:57.015958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.773 qpair failed and we were unable to recover it. 00:29:31.773 [2024-07-15 22:26:57.016365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.773 [2024-07-15 22:26:57.016372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.773 qpair failed and we were unable to recover it. 00:29:31.773 [2024-07-15 22:26:57.016760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.773 [2024-07-15 22:26:57.016766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.773 qpair failed and we were unable to recover it. 00:29:31.773 [2024-07-15 22:26:57.017151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.773 [2024-07-15 22:26:57.017157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.773 qpair failed and we were unable to recover it. 00:29:31.773 [2024-07-15 22:26:57.017575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.773 [2024-07-15 22:26:57.017581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.773 qpair failed and we were unable to recover it. 00:29:31.773 [2024-07-15 22:26:57.017968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.773 [2024-07-15 22:26:57.017974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.773 qpair failed and we were unable to recover it. 00:29:31.773 [2024-07-15 22:26:57.018285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.773 [2024-07-15 22:26:57.018293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.773 qpair failed and we were unable to recover it. 00:29:31.773 [2024-07-15 22:26:57.018693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.773 [2024-07-15 22:26:57.018699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.773 qpair failed and we were unable to recover it. 00:29:31.773 [2024-07-15 22:26:57.018991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.773 [2024-07-15 22:26:57.018999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.773 qpair failed and we were unable to recover it. 00:29:31.773 [2024-07-15 22:26:57.019411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.773 [2024-07-15 22:26:57.019418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.773 qpair failed and we were unable to recover it. 
00:29:31.773 [2024-07-15 22:26:57.019813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.773 [2024-07-15 22:26:57.019820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.773 qpair failed and we were unable to recover it. 00:29:31.773 [2024-07-15 22:26:57.020353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.773 [2024-07-15 22:26:57.020384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.773 qpair failed and we were unable to recover it. 00:29:31.773 [2024-07-15 22:26:57.020784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.773 [2024-07-15 22:26:57.020792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.773 qpair failed and we were unable to recover it. 00:29:31.773 [2024-07-15 22:26:57.021235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.773 [2024-07-15 22:26:57.021242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.773 qpair failed and we were unable to recover it. 00:29:31.773 [2024-07-15 22:26:57.021634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.773 [2024-07-15 22:26:57.021640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.773 qpair failed and we were unable to recover it. 00:29:31.773 [2024-07-15 22:26:57.022032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.773 [2024-07-15 22:26:57.022038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.773 qpair failed and we were unable to recover it. 00:29:31.773 [2024-07-15 22:26:57.022442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.773 [2024-07-15 22:26:57.022448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.773 qpair failed and we were unable to recover it. 00:29:31.773 [2024-07-15 22:26:57.022896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.773 [2024-07-15 22:26:57.022903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.773 qpair failed and we were unable to recover it. 00:29:31.773 [2024-07-15 22:26:57.023343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.773 [2024-07-15 22:26:57.023350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.773 qpair failed and we were unable to recover it. 00:29:31.773 [2024-07-15 22:26:57.023787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.773 [2024-07-15 22:26:57.023794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.773 qpair failed and we were unable to recover it. 
00:29:31.773 [2024-07-15 22:26:57.024358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.773 [2024-07-15 22:26:57.024389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.773 qpair failed and we were unable to recover it. 00:29:31.773 [2024-07-15 22:26:57.024796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.773 [2024-07-15 22:26:57.024804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.773 qpair failed and we were unable to recover it. 00:29:31.773 [2024-07-15 22:26:57.025198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.774 [2024-07-15 22:26:57.025206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.774 qpair failed and we were unable to recover it. 00:29:31.774 [2024-07-15 22:26:57.025582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.774 [2024-07-15 22:26:57.025588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.774 qpair failed and we were unable to recover it. 00:29:31.774 [2024-07-15 22:26:57.025959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.774 [2024-07-15 22:26:57.025966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.774 qpair failed and we were unable to recover it. 00:29:31.774 [2024-07-15 22:26:57.026467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.774 [2024-07-15 22:26:57.026474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.774 qpair failed and we were unable to recover it. 00:29:31.774 [2024-07-15 22:26:57.026862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.774 [2024-07-15 22:26:57.026868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.774 qpair failed and we were unable to recover it. 00:29:31.774 [2024-07-15 22:26:57.027352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.774 [2024-07-15 22:26:57.027379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.774 qpair failed and we were unable to recover it. 00:29:31.774 [2024-07-15 22:26:57.027785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.774 [2024-07-15 22:26:57.027793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.774 qpair failed and we were unable to recover it. 00:29:31.774 [2024-07-15 22:26:57.028182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.774 [2024-07-15 22:26:57.028190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.774 qpair failed and we were unable to recover it. 
00:29:31.774 [2024-07-15 22:26:57.028607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.774 [2024-07-15 22:26:57.028613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.774 qpair failed and we were unable to recover it. 00:29:31.774 [2024-07-15 22:26:57.029054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.774 [2024-07-15 22:26:57.029060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.774 qpair failed and we were unable to recover it. 00:29:31.774 [2024-07-15 22:26:57.029479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.774 [2024-07-15 22:26:57.029486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.774 qpair failed and we were unable to recover it. 00:29:31.774 [2024-07-15 22:26:57.029749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.774 [2024-07-15 22:26:57.029756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.774 qpair failed and we were unable to recover it. 00:29:31.774 [2024-07-15 22:26:57.030164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.774 [2024-07-15 22:26:57.030170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.774 qpair failed and we were unable to recover it. 00:29:31.774 [2024-07-15 22:26:57.030547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.774 [2024-07-15 22:26:57.030554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.774 qpair failed and we were unable to recover it. 00:29:31.774 [2024-07-15 22:26:57.030940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.774 [2024-07-15 22:26:57.030947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.774 qpair failed and we were unable to recover it. 00:29:31.774 [2024-07-15 22:26:57.031274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.774 [2024-07-15 22:26:57.031280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.774 qpair failed and we were unable to recover it. 00:29:31.774 [2024-07-15 22:26:57.031686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.774 [2024-07-15 22:26:57.031692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.774 qpair failed and we were unable to recover it. 00:29:31.774 [2024-07-15 22:26:57.032105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.774 [2024-07-15 22:26:57.032111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.774 qpair failed and we were unable to recover it. 
00:29:31.774 [2024-07-15 22:26:57.032523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.774 [2024-07-15 22:26:57.032530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.774 qpair failed and we were unable to recover it. 00:29:31.774 [2024-07-15 22:26:57.032946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.774 [2024-07-15 22:26:57.032953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.774 qpair failed and we were unable to recover it. 00:29:31.774 [2024-07-15 22:26:57.033463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.774 [2024-07-15 22:26:57.033490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.774 qpair failed and we were unable to recover it. 00:29:31.774 [2024-07-15 22:26:57.033910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.774 [2024-07-15 22:26:57.033918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.774 qpair failed and we were unable to recover it. 00:29:31.774 [2024-07-15 22:26:57.034426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.774 [2024-07-15 22:26:57.034454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.774 qpair failed and we were unable to recover it. 00:29:31.774 [2024-07-15 22:26:57.034860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.774 [2024-07-15 22:26:57.034869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.774 qpair failed and we were unable to recover it. 00:29:31.774 [2024-07-15 22:26:57.035403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.774 [2024-07-15 22:26:57.035430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.774 qpair failed and we were unable to recover it. 00:29:31.774 [2024-07-15 22:26:57.035909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.774 [2024-07-15 22:26:57.035917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.774 qpair failed and we were unable to recover it. 00:29:31.774 [2024-07-15 22:26:57.036302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.774 [2024-07-15 22:26:57.036329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.774 qpair failed and we were unable to recover it. 00:29:31.774 [2024-07-15 22:26:57.036745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.774 [2024-07-15 22:26:57.036753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.774 qpair failed and we were unable to recover it. 
00:29:31.774 [2024-07-15 22:26:57.037159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.774 [2024-07-15 22:26:57.037167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.774 qpair failed and we were unable to recover it. 00:29:31.774 [2024-07-15 22:26:57.037571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.774 [2024-07-15 22:26:57.037581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.774 qpair failed and we were unable to recover it. 00:29:31.774 [2024-07-15 22:26:57.037790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.774 [2024-07-15 22:26:57.037800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.774 qpair failed and we were unable to recover it. 00:29:31.774 [2024-07-15 22:26:57.038177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.774 [2024-07-15 22:26:57.038185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.774 qpair failed and we were unable to recover it. 00:29:31.774 [2024-07-15 22:26:57.038590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.774 [2024-07-15 22:26:57.038596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.774 qpair failed and we were unable to recover it. 00:29:31.774 [2024-07-15 22:26:57.038983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.774 [2024-07-15 22:26:57.038989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.774 qpair failed and we were unable to recover it. 00:29:31.774 [2024-07-15 22:26:57.039382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.774 [2024-07-15 22:26:57.039388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.774 qpair failed and we were unable to recover it. 00:29:31.774 [2024-07-15 22:26:57.039774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.774 [2024-07-15 22:26:57.039780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.774 qpair failed and we were unable to recover it. 00:29:31.774 [2024-07-15 22:26:57.039990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.774 [2024-07-15 22:26:57.039999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.774 qpair failed and we were unable to recover it. 00:29:31.774 [2024-07-15 22:26:57.040376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.774 [2024-07-15 22:26:57.040383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.774 qpair failed and we were unable to recover it. 
00:29:31.774 [2024-07-15 22:26:57.040769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.774 [2024-07-15 22:26:57.040776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.774 qpair failed and we were unable to recover it. 00:29:31.774 [2024-07-15 22:26:57.041169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.774 [2024-07-15 22:26:57.041176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.774 qpair failed and we were unable to recover it. 00:29:31.774 [2024-07-15 22:26:57.041507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.774 [2024-07-15 22:26:57.041514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.775 qpair failed and we were unable to recover it. 00:29:31.775 [2024-07-15 22:26:57.041951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.775 [2024-07-15 22:26:57.041958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.775 qpair failed and we were unable to recover it. 00:29:31.775 [2024-07-15 22:26:57.042164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.775 [2024-07-15 22:26:57.042174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.775 qpair failed and we were unable to recover it. 00:29:31.775 [2024-07-15 22:26:57.042392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.775 [2024-07-15 22:26:57.042399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.775 qpair failed and we were unable to recover it. 00:29:31.775 [2024-07-15 22:26:57.042710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.775 [2024-07-15 22:26:57.042717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.775 qpair failed and we were unable to recover it. 00:29:31.775 [2024-07-15 22:26:57.043119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.775 [2024-07-15 22:26:57.043129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.775 qpair failed and we were unable to recover it. 00:29:31.775 [2024-07-15 22:26:57.043580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.775 [2024-07-15 22:26:57.043586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.775 qpair failed and we were unable to recover it. 00:29:31.775 [2024-07-15 22:26:57.044016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.775 [2024-07-15 22:26:57.044023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.775 qpair failed and we were unable to recover it. 
00:29:31.775 [2024-07-15 22:26:57.044446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.775 [2024-07-15 22:26:57.044453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.775 qpair failed and we were unable to recover it. 00:29:31.775 [2024-07-15 22:26:57.044853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.775 [2024-07-15 22:26:57.044859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.775 qpair failed and we were unable to recover it. 00:29:31.775 [2024-07-15 22:26:57.045245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.775 [2024-07-15 22:26:57.045252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.775 qpair failed and we were unable to recover it. 00:29:31.775 [2024-07-15 22:26:57.045645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.775 [2024-07-15 22:26:57.045652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.775 qpair failed and we were unable to recover it. 00:29:31.775 [2024-07-15 22:26:57.045841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.775 [2024-07-15 22:26:57.045848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.775 qpair failed and we were unable to recover it. 00:29:31.775 [2024-07-15 22:26:57.046224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.775 [2024-07-15 22:26:57.046231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.775 qpair failed and we were unable to recover it. 00:29:31.775 [2024-07-15 22:26:57.046658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.775 [2024-07-15 22:26:57.046665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.775 qpair failed and we were unable to recover it. 00:29:31.775 [2024-07-15 22:26:57.047060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.775 [2024-07-15 22:26:57.047068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.775 qpair failed and we were unable to recover it. 00:29:31.775 [2024-07-15 22:26:57.047548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.775 [2024-07-15 22:26:57.047555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.775 qpair failed and we were unable to recover it. 00:29:31.775 [2024-07-15 22:26:57.047978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.775 [2024-07-15 22:26:57.047985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.775 qpair failed and we were unable to recover it. 
00:29:31.775 [2024-07-15 22:26:57.048384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.775 [2024-07-15 22:26:57.048391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.775 qpair failed and we were unable to recover it. 00:29:31.775 [2024-07-15 22:26:57.048808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.775 [2024-07-15 22:26:57.048815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.775 qpair failed and we were unable to recover it. 00:29:31.775 [2024-07-15 22:26:57.049350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.775 [2024-07-15 22:26:57.049378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.775 qpair failed and we were unable to recover it. 00:29:31.775 [2024-07-15 22:26:57.049775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.775 [2024-07-15 22:26:57.049783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.775 qpair failed and we were unable to recover it. 00:29:31.775 [2024-07-15 22:26:57.050180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.775 [2024-07-15 22:26:57.050188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.775 qpair failed and we were unable to recover it. 00:29:31.775 [2024-07-15 22:26:57.050604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.775 [2024-07-15 22:26:57.050611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.775 qpair failed and we were unable to recover it. 00:29:31.775 [2024-07-15 22:26:57.051043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.775 [2024-07-15 22:26:57.051050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.775 qpair failed and we were unable to recover it. 00:29:31.775 [2024-07-15 22:26:57.051518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.775 [2024-07-15 22:26:57.051525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.775 qpair failed and we were unable to recover it. 00:29:31.775 [2024-07-15 22:26:57.051917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.775 [2024-07-15 22:26:57.051924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.775 qpair failed and we were unable to recover it. 00:29:31.775 [2024-07-15 22:26:57.052313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.775 [2024-07-15 22:26:57.052320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.775 qpair failed and we were unable to recover it. 
00:29:31.775 [2024-07-15 22:26:57.052725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.775 [2024-07-15 22:26:57.052732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.775 qpair failed and we were unable to recover it. 00:29:31.775 [2024-07-15 22:26:57.053170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.775 [2024-07-15 22:26:57.053181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.775 qpair failed and we were unable to recover it. 00:29:31.775 [2024-07-15 22:26:57.053590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.775 [2024-07-15 22:26:57.053597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.775 qpair failed and we were unable to recover it. 00:29:31.775 [2024-07-15 22:26:57.053980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.775 [2024-07-15 22:26:57.053987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.775 qpair failed and we were unable to recover it. 00:29:31.775 [2024-07-15 22:26:57.054380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.775 [2024-07-15 22:26:57.054386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.775 qpair failed and we were unable to recover it. 00:29:31.775 [2024-07-15 22:26:57.054772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.775 [2024-07-15 22:26:57.054778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.775 qpair failed and we were unable to recover it. 00:29:31.775 [2024-07-15 22:26:57.055168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.775 [2024-07-15 22:26:57.055175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.775 qpair failed and we were unable to recover it. 00:29:31.775 [2024-07-15 22:26:57.055553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.776 [2024-07-15 22:26:57.055560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.776 qpair failed and we were unable to recover it. 00:29:31.776 [2024-07-15 22:26:57.055960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.776 [2024-07-15 22:26:57.055967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.776 qpair failed and we were unable to recover it. 00:29:31.776 [2024-07-15 22:26:57.056238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.776 [2024-07-15 22:26:57.056246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.776 qpair failed and we were unable to recover it. 
00:29:31.776 [2024-07-15 22:26:57.056534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.776 [2024-07-15 22:26:57.056541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.776 qpair failed and we were unable to recover it. 00:29:31.776 [2024-07-15 22:26:57.056948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.776 [2024-07-15 22:26:57.056954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.776 qpair failed and we were unable to recover it. 00:29:31.776 [2024-07-15 22:26:57.057384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.776 [2024-07-15 22:26:57.057391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.776 qpair failed and we were unable to recover it. 00:29:31.776 [2024-07-15 22:26:57.057782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.776 [2024-07-15 22:26:57.057788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.776 qpair failed and we were unable to recover it. 00:29:31.776 [2024-07-15 22:26:57.058200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.776 [2024-07-15 22:26:57.058207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.776 qpair failed and we were unable to recover it. 00:29:31.776 [2024-07-15 22:26:57.058619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.776 [2024-07-15 22:26:57.058626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.776 qpair failed and we were unable to recover it. 00:29:31.776 [2024-07-15 22:26:57.059013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.776 [2024-07-15 22:26:57.059019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.776 qpair failed and we were unable to recover it. 00:29:31.776 [2024-07-15 22:26:57.059502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.776 [2024-07-15 22:26:57.059508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.776 qpair failed and we were unable to recover it. 00:29:31.776 [2024-07-15 22:26:57.059806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.776 [2024-07-15 22:26:57.059813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.776 qpair failed and we were unable to recover it. 00:29:31.776 [2024-07-15 22:26:57.060242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.776 [2024-07-15 22:26:57.060249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.776 qpair failed and we were unable to recover it. 
00:29:31.776 [2024-07-15 22:26:57.060666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.776 [2024-07-15 22:26:57.060673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.776 qpair failed and we were unable to recover it. 00:29:31.776 [2024-07-15 22:26:57.060981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.776 [2024-07-15 22:26:57.060987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.776 qpair failed and we were unable to recover it. 00:29:31.776 [2024-07-15 22:26:57.061385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.776 [2024-07-15 22:26:57.061392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.776 qpair failed and we were unable to recover it. 00:29:31.776 [2024-07-15 22:26:57.061782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.776 [2024-07-15 22:26:57.061788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.776 qpair failed and we were unable to recover it. 00:29:31.776 [2024-07-15 22:26:57.062177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.776 [2024-07-15 22:26:57.062183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.776 qpair failed and we were unable to recover it. 00:29:31.776 [2024-07-15 22:26:57.062599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.776 [2024-07-15 22:26:57.062605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.776 qpair failed and we were unable to recover it. 00:29:31.776 [2024-07-15 22:26:57.062998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.776 [2024-07-15 22:26:57.063004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.776 qpair failed and we were unable to recover it. 00:29:31.776 [2024-07-15 22:26:57.063430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.776 [2024-07-15 22:26:57.063437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.776 qpair failed and we were unable to recover it. 00:29:31.776 [2024-07-15 22:26:57.063866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.776 [2024-07-15 22:26:57.063872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.776 qpair failed and we were unable to recover it. 00:29:31.776 [2024-07-15 22:26:57.064390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.776 [2024-07-15 22:26:57.064417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.776 qpair failed and we were unable to recover it. 
00:29:31.776 [2024-07-15 22:26:57.064825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.776 [2024-07-15 22:26:57.064834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.776 qpair failed and we were unable to recover it. 00:29:31.776 [2024-07-15 22:26:57.065353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.776 [2024-07-15 22:26:57.065381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.776 qpair failed and we were unable to recover it. 00:29:31.776 [2024-07-15 22:26:57.065781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.776 [2024-07-15 22:26:57.065789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.776 qpair failed and we were unable to recover it. 00:29:31.776 [2024-07-15 22:26:57.066175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.776 [2024-07-15 22:26:57.066182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.776 qpair failed and we were unable to recover it. 00:29:31.776 [2024-07-15 22:26:57.066577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.776 [2024-07-15 22:26:57.066584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.776 qpair failed and we were unable to recover it. 00:29:31.776 [2024-07-15 22:26:57.066994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.776 [2024-07-15 22:26:57.067000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.776 qpair failed and we were unable to recover it. 00:29:31.776 [2024-07-15 22:26:57.067306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.776 [2024-07-15 22:26:57.067313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.776 qpair failed and we were unable to recover it. 00:29:31.776 [2024-07-15 22:26:57.067600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.776 [2024-07-15 22:26:57.067607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.776 qpair failed and we were unable to recover it. 00:29:31.776 [2024-07-15 22:26:57.067991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.776 [2024-07-15 22:26:57.067997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.776 qpair failed and we were unable to recover it. 00:29:31.776 [2024-07-15 22:26:57.068432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.776 [2024-07-15 22:26:57.068439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.776 qpair failed and we were unable to recover it. 
00:29:31.776 [2024-07-15 22:26:57.068832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.776 [2024-07-15 22:26:57.068838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.776 qpair failed and we were unable to recover it. 00:29:31.776 [2024-07-15 22:26:57.069337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.776 [2024-07-15 22:26:57.069367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.776 qpair failed and we were unable to recover it. 00:29:31.776 [2024-07-15 22:26:57.069770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.776 [2024-07-15 22:26:57.069779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.776 qpair failed and we were unable to recover it. 00:29:31.776 [2024-07-15 22:26:57.070172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.776 [2024-07-15 22:26:57.070179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.776 qpair failed and we were unable to recover it. 00:29:31.776 [2024-07-15 22:26:57.070596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.776 [2024-07-15 22:26:57.070603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.776 qpair failed and we were unable to recover it. 00:29:31.776 [2024-07-15 22:26:57.071021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.776 [2024-07-15 22:26:57.071028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.776 qpair failed and we were unable to recover it. 00:29:31.776 [2024-07-15 22:26:57.071324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.776 [2024-07-15 22:26:57.071331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.776 qpair failed and we were unable to recover it. 00:29:31.776 [2024-07-15 22:26:57.071734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.777 [2024-07-15 22:26:57.071740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.777 qpair failed and we were unable to recover it. 00:29:31.777 [2024-07-15 22:26:57.072130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.777 [2024-07-15 22:26:57.072137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.777 qpair failed and we were unable to recover it. 00:29:31.777 [2024-07-15 22:26:57.072575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.777 [2024-07-15 22:26:57.072582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.777 qpair failed and we were unable to recover it. 
00:29:31.777 [2024-07-15 22:26:57.073007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.777 [2024-07-15 22:26:57.073013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.777 qpair failed and we were unable to recover it. 00:29:31.777 [2024-07-15 22:26:57.073442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.777 [2024-07-15 22:26:57.073448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.777 qpair failed and we were unable to recover it. 00:29:31.777 [2024-07-15 22:26:57.073878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.777 [2024-07-15 22:26:57.073885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.777 qpair failed and we were unable to recover it. 00:29:31.777 [2024-07-15 22:26:57.074294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.777 [2024-07-15 22:26:57.074301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.777 qpair failed and we were unable to recover it. 00:29:31.777 [2024-07-15 22:26:57.074715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.777 [2024-07-15 22:26:57.074722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.777 qpair failed and we were unable to recover it. 00:29:31.777 [2024-07-15 22:26:57.075103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.777 [2024-07-15 22:26:57.075110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.777 qpair failed and we were unable to recover it. 00:29:31.777 [2024-07-15 22:26:57.075524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.777 [2024-07-15 22:26:57.075531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.777 qpair failed and we were unable to recover it. 00:29:31.777 [2024-07-15 22:26:57.075938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.777 [2024-07-15 22:26:57.075944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.777 qpair failed and we were unable to recover it. 00:29:31.777 [2024-07-15 22:26:57.076154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.777 [2024-07-15 22:26:57.076165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.777 qpair failed and we were unable to recover it. 00:29:31.777 [2024-07-15 22:26:57.076584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.777 [2024-07-15 22:26:57.076591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.777 qpair failed and we were unable to recover it. 
00:29:31.777 [2024-07-15 22:26:57.076982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.777 [2024-07-15 22:26:57.076988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.777 qpair failed and we were unable to recover it. 00:29:31.777 [2024-07-15 22:26:57.077467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.777 [2024-07-15 22:26:57.077494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.777 qpair failed and we were unable to recover it. 00:29:31.777 [2024-07-15 22:26:57.077932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.777 [2024-07-15 22:26:57.077940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.777 qpair failed and we were unable to recover it. 00:29:31.777 [2024-07-15 22:26:57.078443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.777 [2024-07-15 22:26:57.078470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.777 qpair failed and we were unable to recover it. 00:29:31.777 [2024-07-15 22:26:57.078931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.777 [2024-07-15 22:26:57.078939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.777 qpair failed and we were unable to recover it. 00:29:31.777 [2024-07-15 22:26:57.079458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.777 [2024-07-15 22:26:57.079485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.777 qpair failed and we were unable to recover it. 00:29:31.777 [2024-07-15 22:26:57.079933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.777 [2024-07-15 22:26:57.079941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.777 qpair failed and we were unable to recover it. 00:29:31.777 [2024-07-15 22:26:57.080471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.777 [2024-07-15 22:26:57.080499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.777 qpair failed and we were unable to recover it. 00:29:31.777 [2024-07-15 22:26:57.080901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.777 [2024-07-15 22:26:57.080909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.777 qpair failed and we were unable to recover it. 00:29:31.777 [2024-07-15 22:26:57.081396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.777 [2024-07-15 22:26:57.081423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.777 qpair failed and we were unable to recover it. 
00:29:31.777 [2024-07-15 22:26:57.081877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.777 [2024-07-15 22:26:57.081886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.777 qpair failed and we were unable to recover it. 00:29:31.777 [2024-07-15 22:26:57.082435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.777 [2024-07-15 22:26:57.082462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.777 qpair failed and we were unable to recover it. 00:29:31.777 [2024-07-15 22:26:57.082767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.777 [2024-07-15 22:26:57.082775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.777 qpair failed and we were unable to recover it. 00:29:31.777 [2024-07-15 22:26:57.083287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.777 [2024-07-15 22:26:57.083314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.777 qpair failed and we were unable to recover it. 00:29:31.777 [2024-07-15 22:26:57.083748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.777 [2024-07-15 22:26:57.083756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.777 qpair failed and we were unable to recover it. 00:29:31.777 [2024-07-15 22:26:57.084151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.777 [2024-07-15 22:26:57.084158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:31.777 qpair failed and we were unable to recover it. 00:29:32.049 [2024-07-15 22:26:57.084606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-07-15 22:26:57.084613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-07-15 22:26:57.085000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-07-15 22:26:57.085007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-07-15 22:26:57.085305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-07-15 22:26:57.085312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-07-15 22:26:57.085693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-07-15 22:26:57.085699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 
00:29:32.049 [2024-07-15 22:26:57.085983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-07-15 22:26:57.085990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-07-15 22:26:57.086368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-07-15 22:26:57.086378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-07-15 22:26:57.086769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-07-15 22:26:57.086776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-07-15 22:26:57.087282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-07-15 22:26:57.087310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-07-15 22:26:57.087726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-07-15 22:26:57.087734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-07-15 22:26:57.088168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-07-15 22:26:57.088176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-07-15 22:26:57.088567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-07-15 22:26:57.088573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-07-15 22:26:57.088960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-07-15 22:26:57.088966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-07-15 22:26:57.089394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-07-15 22:26:57.089400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-07-15 22:26:57.089889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-07-15 22:26:57.089895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 
00:29:32.049 [2024-07-15 22:26:57.090388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-07-15 22:26:57.090415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-07-15 22:26:57.090849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-07-15 22:26:57.090857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-07-15 22:26:57.091379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-07-15 22:26:57.091407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-07-15 22:26:57.091822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-07-15 22:26:57.091831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-07-15 22:26:57.092247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-07-15 22:26:57.092254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-07-15 22:26:57.092668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-07-15 22:26:57.092675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-07-15 22:26:57.093085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-07-15 22:26:57.093092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-07-15 22:26:57.093450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-07-15 22:26:57.093457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-07-15 22:26:57.093863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-07-15 22:26:57.093869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-07-15 22:26:57.094368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-07-15 22:26:57.094396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 
00:29:32.049 [2024-07-15 22:26:57.094838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-07-15 22:26:57.094847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-07-15 22:26:57.095402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-07-15 22:26:57.095429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-07-15 22:26:57.095520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-07-15 22:26:57.095529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-07-15 22:26:57.095824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-07-15 22:26:57.095832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-07-15 22:26:57.096226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-07-15 22:26:57.096233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-07-15 22:26:57.096495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-07-15 22:26:57.096503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-07-15 22:26:57.096911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-07-15 22:26:57.096918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-07-15 22:26:57.097325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-07-15 22:26:57.097332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-07-15 22:26:57.097737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-07-15 22:26:57.097744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-07-15 22:26:57.098130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-07-15 22:26:57.098137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 
00:29:32.049 [2024-07-15 22:26:57.098352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-07-15 22:26:57.098361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.050 [2024-07-15 22:26:57.098778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-07-15 22:26:57.098784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-07-15 22:26:57.099170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-07-15 22:26:57.099177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-07-15 22:26:57.099550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-07-15 22:26:57.099557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-07-15 22:26:57.099936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-07-15 22:26:57.099943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-07-15 22:26:57.100258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-07-15 22:26:57.100265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-07-15 22:26:57.100657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-07-15 22:26:57.100664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-07-15 22:26:57.101051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-07-15 22:26:57.101057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-07-15 22:26:57.101524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-07-15 22:26:57.101531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-07-15 22:26:57.101717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-07-15 22:26:57.101725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 
00:29:32.050 [2024-07-15 22:26:57.102119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-07-15 22:26:57.102130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-07-15 22:26:57.102551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-07-15 22:26:57.102560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-07-15 22:26:57.102946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-07-15 22:26:57.102952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-07-15 22:26:57.103475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-07-15 22:26:57.103502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-07-15 22:26:57.103933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-07-15 22:26:57.103942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-07-15 22:26:57.104365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-07-15 22:26:57.104394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-07-15 22:26:57.104725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-07-15 22:26:57.104733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-07-15 22:26:57.105170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-07-15 22:26:57.105177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-07-15 22:26:57.105558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-07-15 22:26:57.105572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-07-15 22:26:57.105980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-07-15 22:26:57.105986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 
00:29:32.050 [2024-07-15 22:26:57.106375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-07-15 22:26:57.106382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-07-15 22:26:57.106824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-07-15 22:26:57.106830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-07-15 22:26:57.107113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-07-15 22:26:57.107119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-07-15 22:26:57.107557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-07-15 22:26:57.107563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-07-15 22:26:57.108001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-07-15 22:26:57.108008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-07-15 22:26:57.108393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-07-15 22:26:57.108400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-07-15 22:26:57.108815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-07-15 22:26:57.108822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-07-15 22:26:57.109343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-07-15 22:26:57.109370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-07-15 22:26:57.109772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-07-15 22:26:57.109780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-07-15 22:26:57.110247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-07-15 22:26:57.110254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 
00:29:32.050 [2024-07-15 22:26:57.110657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-07-15 22:26:57.110663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-07-15 22:26:57.111051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-07-15 22:26:57.111057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-07-15 22:26:57.111430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-07-15 22:26:57.111436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-07-15 22:26:57.111841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-07-15 22:26:57.111847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-07-15 22:26:57.112269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-07-15 22:26:57.112276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-07-15 22:26:57.112769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-07-15 22:26:57.112776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-07-15 22:26:57.113172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-07-15 22:26:57.113179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-07-15 22:26:57.113639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-07-15 22:26:57.113645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-07-15 22:26:57.114077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-07-15 22:26:57.114083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-07-15 22:26:57.114483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-07-15 22:26:57.114490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 
00:29:32.050 [2024-07-15 22:26:57.114884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.051 [2024-07-15 22:26:57.114891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.051 qpair failed and we were unable to recover it. 00:29:32.051 [2024-07-15 22:26:57.115377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.051 [2024-07-15 22:26:57.115404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.051 qpair failed and we were unable to recover it. 00:29:32.051 [2024-07-15 22:26:57.115833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.051 [2024-07-15 22:26:57.115842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.051 qpair failed and we were unable to recover it. 00:29:32.051 [2024-07-15 22:26:57.116253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.051 [2024-07-15 22:26:57.116261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.051 qpair failed and we were unable to recover it. 00:29:32.051 [2024-07-15 22:26:57.116475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.051 [2024-07-15 22:26:57.116485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.051 qpair failed and we were unable to recover it. 00:29:32.051 [2024-07-15 22:26:57.116899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.051 [2024-07-15 22:26:57.116906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.051 qpair failed and we were unable to recover it. 00:29:32.051 [2024-07-15 22:26:57.117316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.051 [2024-07-15 22:26:57.117322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.051 qpair failed and we were unable to recover it. 00:29:32.051 [2024-07-15 22:26:57.117772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.051 [2024-07-15 22:26:57.117779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.051 qpair failed and we were unable to recover it. 00:29:32.051 [2024-07-15 22:26:57.118167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.051 [2024-07-15 22:26:57.118173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.051 qpair failed and we were unable to recover it. 00:29:32.051 [2024-07-15 22:26:57.118549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.051 [2024-07-15 22:26:57.118555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.051 qpair failed and we were unable to recover it. 
00:29:32.051 [2024-07-15 22:26:57.118840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.051 [2024-07-15 22:26:57.118847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.051 qpair failed and we were unable to recover it. 00:29:32.051 [2024-07-15 22:26:57.118992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.051 [2024-07-15 22:26:57.119002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.051 qpair failed and we were unable to recover it. 00:29:32.051 [2024-07-15 22:26:57.119403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.051 [2024-07-15 22:26:57.119409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.051 qpair failed and we were unable to recover it. 00:29:32.051 [2024-07-15 22:26:57.119718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.051 [2024-07-15 22:26:57.119725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.051 qpair failed and we were unable to recover it. 00:29:32.051 [2024-07-15 22:26:57.120161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.051 [2024-07-15 22:26:57.120168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.051 qpair failed and we were unable to recover it. 00:29:32.051 [2024-07-15 22:26:57.120552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.051 [2024-07-15 22:26:57.120558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.051 qpair failed and we were unable to recover it. 00:29:32.051 [2024-07-15 22:26:57.120982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.051 [2024-07-15 22:26:57.120988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.051 qpair failed and we were unable to recover it. 00:29:32.051 [2024-07-15 22:26:57.121400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.051 [2024-07-15 22:26:57.121407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.051 qpair failed and we were unable to recover it. 00:29:32.051 [2024-07-15 22:26:57.121792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.051 [2024-07-15 22:26:57.121798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.051 qpair failed and we were unable to recover it. 00:29:32.051 [2024-07-15 22:26:57.122211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.051 [2024-07-15 22:26:57.122218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.051 qpair failed and we were unable to recover it. 
00:29:32.051 [2024-07-15 22:26:57.122637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.051 [2024-07-15 22:26:57.122644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.051 qpair failed and we were unable to recover it. 00:29:32.051 [2024-07-15 22:26:57.123029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.051 [2024-07-15 22:26:57.123035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.051 qpair failed and we were unable to recover it. 00:29:32.051 [2024-07-15 22:26:57.123342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.051 [2024-07-15 22:26:57.123349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.051 qpair failed and we were unable to recover it. 00:29:32.051 [2024-07-15 22:26:57.123776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.051 [2024-07-15 22:26:57.123784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.051 qpair failed and we were unable to recover it. 00:29:32.051 [2024-07-15 22:26:57.124191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.051 [2024-07-15 22:26:57.124198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.051 qpair failed and we were unable to recover it. 00:29:32.051 [2024-07-15 22:26:57.124292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.051 [2024-07-15 22:26:57.124300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.051 qpair failed and we were unable to recover it. 00:29:32.051 [2024-07-15 22:26:57.124591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.051 [2024-07-15 22:26:57.124598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.051 qpair failed and we were unable to recover it. 00:29:32.051 [2024-07-15 22:26:57.124991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.051 [2024-07-15 22:26:57.124998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.051 qpair failed and we were unable to recover it. 00:29:32.051 [2024-07-15 22:26:57.125431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.051 [2024-07-15 22:26:57.125438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.051 qpair failed and we were unable to recover it. 00:29:32.051 [2024-07-15 22:26:57.125822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.051 [2024-07-15 22:26:57.125829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.051 qpair failed and we were unable to recover it. 
00:29:32.051 [2024-07-15 22:26:57.126263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.051 [2024-07-15 22:26:57.126270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.051 qpair failed and we were unable to recover it. 00:29:32.051 [2024-07-15 22:26:57.126662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.051 [2024-07-15 22:26:57.126668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.051 qpair failed and we were unable to recover it. 00:29:32.051 [2024-07-15 22:26:57.127086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.051 [2024-07-15 22:26:57.127092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.051 qpair failed and we were unable to recover it. 00:29:32.051 [2024-07-15 22:26:57.127582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.051 [2024-07-15 22:26:57.127588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.051 qpair failed and we were unable to recover it. 00:29:32.051 [2024-07-15 22:26:57.127970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.051 [2024-07-15 22:26:57.127978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.051 qpair failed and we were unable to recover it. 00:29:32.051 [2024-07-15 22:26:57.128483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.051 [2024-07-15 22:26:57.128511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.051 qpair failed and we were unable to recover it. 00:29:32.051 [2024-07-15 22:26:57.128945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.051 [2024-07-15 22:26:57.128954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.051 qpair failed and we were unable to recover it. 00:29:32.051 [2024-07-15 22:26:57.129458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.051 [2024-07-15 22:26:57.129486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.051 qpair failed and we were unable to recover it. 00:29:32.051 [2024-07-15 22:26:57.129890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.051 [2024-07-15 22:26:57.129898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.051 qpair failed and we were unable to recover it. 00:29:32.051 [2024-07-15 22:26:57.130393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.051 [2024-07-15 22:26:57.130421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.051 qpair failed and we were unable to recover it. 
00:29:32.051 [2024-07-15 22:26:57.130830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.051 [2024-07-15 22:26:57.130838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.052 qpair failed and we were unable to recover it. 00:29:32.052 [2024-07-15 22:26:57.131327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.052 [2024-07-15 22:26:57.131355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.052 qpair failed and we were unable to recover it. 00:29:32.052 [2024-07-15 22:26:57.131787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.052 [2024-07-15 22:26:57.131795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.052 qpair failed and we were unable to recover it. 00:29:32.052 [2024-07-15 22:26:57.132266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.052 [2024-07-15 22:26:57.132273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.052 qpair failed and we were unable to recover it. 00:29:32.052 [2024-07-15 22:26:57.132657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.052 [2024-07-15 22:26:57.132663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.052 qpair failed and we were unable to recover it. 00:29:32.052 [2024-07-15 22:26:57.132866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.052 [2024-07-15 22:26:57.132875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.052 qpair failed and we were unable to recover it. 00:29:32.052 [2024-07-15 22:26:57.133299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.052 [2024-07-15 22:26:57.133306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.052 qpair failed and we were unable to recover it. 00:29:32.052 [2024-07-15 22:26:57.133690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.052 [2024-07-15 22:26:57.133696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.052 qpair failed and we were unable to recover it. 00:29:32.052 [2024-07-15 22:26:57.134082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.052 [2024-07-15 22:26:57.134089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.052 qpair failed and we were unable to recover it. 00:29:32.052 [2024-07-15 22:26:57.134490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.052 [2024-07-15 22:26:57.134498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.052 qpair failed and we were unable to recover it. 
00:29:32.052 [2024-07-15 22:26:57.134910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.052 [2024-07-15 22:26:57.134917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.052 qpair failed and we were unable to recover it. 00:29:32.052 [2024-07-15 22:26:57.135361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.052 [2024-07-15 22:26:57.135371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.052 qpair failed and we were unable to recover it. 00:29:32.052 [2024-07-15 22:26:57.135755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.052 [2024-07-15 22:26:57.135762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.052 qpair failed and we were unable to recover it. 00:29:32.052 [2024-07-15 22:26:57.136147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.052 [2024-07-15 22:26:57.136154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.052 qpair failed and we were unable to recover it. 00:29:32.052 [2024-07-15 22:26:57.136536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.052 [2024-07-15 22:26:57.136543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.052 qpair failed and we were unable to recover it. 00:29:32.052 [2024-07-15 22:26:57.136932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.052 [2024-07-15 22:26:57.136938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.052 qpair failed and we were unable to recover it. 00:29:32.052 [2024-07-15 22:26:57.137354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.052 [2024-07-15 22:26:57.137361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.052 qpair failed and we were unable to recover it. 00:29:32.052 [2024-07-15 22:26:57.137664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.052 [2024-07-15 22:26:57.137671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.052 qpair failed and we were unable to recover it. 00:29:32.052 [2024-07-15 22:26:57.138079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.052 [2024-07-15 22:26:57.138085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.052 qpair failed and we were unable to recover it. 00:29:32.052 [2024-07-15 22:26:57.138468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.052 [2024-07-15 22:26:57.138475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.052 qpair failed and we were unable to recover it. 
00:29:32.057 [2024-07-15 22:26:57.214602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.057 [2024-07-15 22:26:57.214629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.057 qpair failed and we were unable to recover it. 00:29:32.057 [2024-07-15 22:26:57.215036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.057 [2024-07-15 22:26:57.215044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.057 qpair failed and we were unable to recover it. 00:29:32.057 [2024-07-15 22:26:57.215471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.057 [2024-07-15 22:26:57.215479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.057 qpair failed and we were unable to recover it. 00:29:32.057 [2024-07-15 22:26:57.215880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.057 [2024-07-15 22:26:57.215888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.057 qpair failed and we were unable to recover it. 00:29:32.057 [2024-07-15 22:26:57.216099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.057 [2024-07-15 22:26:57.216108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.057 qpair failed and we were unable to recover it. 00:29:32.057 [2024-07-15 22:26:57.216487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.057 [2024-07-15 22:26:57.216494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.057 qpair failed and we were unable to recover it. 00:29:32.057 [2024-07-15 22:26:57.216773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.057 [2024-07-15 22:26:57.216779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.057 qpair failed and we were unable to recover it. 00:29:32.057 [2024-07-15 22:26:57.217204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.057 [2024-07-15 22:26:57.217210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.057 qpair failed and we were unable to recover it. 00:29:32.057 [2024-07-15 22:26:57.217607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.057 [2024-07-15 22:26:57.217613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.057 qpair failed and we were unable to recover it. 00:29:32.057 [2024-07-15 22:26:57.218033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.057 [2024-07-15 22:26:57.218040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.057 qpair failed and we were unable to recover it. 
00:29:32.057 [2024-07-15 22:26:57.218457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.057 [2024-07-15 22:26:57.218464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.057 qpair failed and we were unable to recover it. 00:29:32.057 [2024-07-15 22:26:57.218779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.057 [2024-07-15 22:26:57.218785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.057 qpair failed and we were unable to recover it. 00:29:32.057 [2024-07-15 22:26:57.219203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.057 [2024-07-15 22:26:57.219209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.057 qpair failed and we were unable to recover it. 00:29:32.057 [2024-07-15 22:26:57.219602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.057 [2024-07-15 22:26:57.219608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.057 qpair failed and we were unable to recover it. 00:29:32.057 [2024-07-15 22:26:57.220030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.057 [2024-07-15 22:26:57.220036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.057 qpair failed and we were unable to recover it. 00:29:32.057 [2024-07-15 22:26:57.220459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.057 [2024-07-15 22:26:57.220465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.057 qpair failed and we were unable to recover it. 00:29:32.058 [2024-07-15 22:26:57.220862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.058 [2024-07-15 22:26:57.220868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.058 qpair failed and we were unable to recover it. 00:29:32.058 [2024-07-15 22:26:57.221187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.058 [2024-07-15 22:26:57.221194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.058 qpair failed and we were unable to recover it. 00:29:32.058 [2024-07-15 22:26:57.221599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.058 [2024-07-15 22:26:57.221605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.058 qpair failed and we were unable to recover it. 00:29:32.058 [2024-07-15 22:26:57.222000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.058 [2024-07-15 22:26:57.222007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.058 qpair failed and we were unable to recover it. 
00:29:32.058 [2024-07-15 22:26:57.222209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.058 [2024-07-15 22:26:57.222217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.058 qpair failed and we were unable to recover it. 00:29:32.058 [2024-07-15 22:26:57.222596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.058 [2024-07-15 22:26:57.222603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.058 qpair failed and we were unable to recover it. 00:29:32.058 [2024-07-15 22:26:57.223008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.058 [2024-07-15 22:26:57.223014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.058 qpair failed and we were unable to recover it. 00:29:32.058 [2024-07-15 22:26:57.223309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.058 [2024-07-15 22:26:57.223316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.058 qpair failed and we were unable to recover it. 00:29:32.058 [2024-07-15 22:26:57.223754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.058 [2024-07-15 22:26:57.223760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.058 qpair failed and we were unable to recover it. 00:29:32.058 [2024-07-15 22:26:57.224034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.058 [2024-07-15 22:26:57.224040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.058 qpair failed and we were unable to recover it. 00:29:32.058 [2024-07-15 22:26:57.224425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.058 [2024-07-15 22:26:57.224432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.058 qpair failed and we were unable to recover it. 00:29:32.058 [2024-07-15 22:26:57.224820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.058 [2024-07-15 22:26:57.224827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.058 qpair failed and we were unable to recover it. 00:29:32.058 [2024-07-15 22:26:57.224923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.058 [2024-07-15 22:26:57.224933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.058 qpair failed and we were unable to recover it. 00:29:32.058 [2024-07-15 22:26:57.225348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.058 [2024-07-15 22:26:57.225354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.058 qpair failed and we were unable to recover it. 
00:29:32.058 [2024-07-15 22:26:57.225778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.058 [2024-07-15 22:26:57.225784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.058 qpair failed and we were unable to recover it. 00:29:32.058 [2024-07-15 22:26:57.226178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.058 [2024-07-15 22:26:57.226185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.058 qpair failed and we were unable to recover it. 00:29:32.058 [2024-07-15 22:26:57.226606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.058 [2024-07-15 22:26:57.226612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.058 qpair failed and we were unable to recover it. 00:29:32.058 [2024-07-15 22:26:57.226998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.058 [2024-07-15 22:26:57.227004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.058 qpair failed and we were unable to recover it. 00:29:32.058 [2024-07-15 22:26:57.227310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.058 [2024-07-15 22:26:57.227318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.058 qpair failed and we were unable to recover it. 00:29:32.058 [2024-07-15 22:26:57.227707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.058 [2024-07-15 22:26:57.227714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.058 qpair failed and we were unable to recover it. 00:29:32.058 [2024-07-15 22:26:57.228099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.058 [2024-07-15 22:26:57.228105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.058 qpair failed and we were unable to recover it. 00:29:32.058 [2024-07-15 22:26:57.228502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.058 [2024-07-15 22:26:57.228509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.058 qpair failed and we were unable to recover it. 00:29:32.058 [2024-07-15 22:26:57.228900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.058 [2024-07-15 22:26:57.228907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.058 qpair failed and we were unable to recover it. 00:29:32.058 [2024-07-15 22:26:57.229420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.058 [2024-07-15 22:26:57.229448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.058 qpair failed and we were unable to recover it. 
00:29:32.058 [2024-07-15 22:26:57.229852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.058 [2024-07-15 22:26:57.229860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.058 qpair failed and we were unable to recover it. 00:29:32.058 [2024-07-15 22:26:57.230352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.058 [2024-07-15 22:26:57.230379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.058 qpair failed and we were unable to recover it. 00:29:32.058 [2024-07-15 22:26:57.230788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.058 [2024-07-15 22:26:57.230796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.058 qpair failed and we were unable to recover it. 00:29:32.058 [2024-07-15 22:26:57.231193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.058 [2024-07-15 22:26:57.231200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.058 qpair failed and we were unable to recover it. 00:29:32.058 [2024-07-15 22:26:57.231616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.058 [2024-07-15 22:26:57.231622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.058 qpair failed and we were unable to recover it. 00:29:32.058 [2024-07-15 22:26:57.232037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.058 [2024-07-15 22:26:57.232044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.058 qpair failed and we were unable to recover it. 00:29:32.058 [2024-07-15 22:26:57.232425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.058 [2024-07-15 22:26:57.232433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.058 qpair failed and we were unable to recover it. 00:29:32.058 [2024-07-15 22:26:57.232754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.058 [2024-07-15 22:26:57.232760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.058 qpair failed and we were unable to recover it. 00:29:32.058 [2024-07-15 22:26:57.233175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.058 [2024-07-15 22:26:57.233181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.058 qpair failed and we were unable to recover it. 00:29:32.058 [2024-07-15 22:26:57.233598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.058 [2024-07-15 22:26:57.233604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.058 qpair failed and we were unable to recover it. 
00:29:32.058 [2024-07-15 22:26:57.234017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.058 [2024-07-15 22:26:57.234023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.058 qpair failed and we were unable to recover it. 00:29:32.058 [2024-07-15 22:26:57.234441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.058 [2024-07-15 22:26:57.234447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.058 qpair failed and we were unable to recover it. 00:29:32.058 [2024-07-15 22:26:57.234879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.058 [2024-07-15 22:26:57.234885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.058 qpair failed and we were unable to recover it. 00:29:32.058 [2024-07-15 22:26:57.235289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.058 [2024-07-15 22:26:57.235295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.058 qpair failed and we were unable to recover it. 00:29:32.058 [2024-07-15 22:26:57.235739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.058 [2024-07-15 22:26:57.235745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.058 qpair failed and we were unable to recover it. 00:29:32.058 [2024-07-15 22:26:57.236061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.058 [2024-07-15 22:26:57.236067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.058 qpair failed and we were unable to recover it. 00:29:32.058 [2024-07-15 22:26:57.236327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.059 [2024-07-15 22:26:57.236337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.059 qpair failed and we were unable to recover it. 00:29:32.059 [2024-07-15 22:26:57.236719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.059 [2024-07-15 22:26:57.236726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.059 qpair failed and we were unable to recover it. 00:29:32.059 [2024-07-15 22:26:57.237159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.059 [2024-07-15 22:26:57.237166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.059 qpair failed and we were unable to recover it. 00:29:32.059 [2024-07-15 22:26:57.237575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.059 [2024-07-15 22:26:57.237581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.059 qpair failed and we were unable to recover it. 
00:29:32.059 [2024-07-15 22:26:57.238013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.059 [2024-07-15 22:26:57.238020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.059 qpair failed and we were unable to recover it. 00:29:32.059 [2024-07-15 22:26:57.238438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.059 [2024-07-15 22:26:57.238445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.059 qpair failed and we were unable to recover it. 00:29:32.059 [2024-07-15 22:26:57.238835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.059 [2024-07-15 22:26:57.238841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.059 qpair failed and we were unable to recover it. 00:29:32.059 [2024-07-15 22:26:57.239235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.059 [2024-07-15 22:26:57.239248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.059 qpair failed and we were unable to recover it. 00:29:32.059 [2024-07-15 22:26:57.239656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.059 [2024-07-15 22:26:57.239662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.059 qpair failed and we were unable to recover it. 00:29:32.059 [2024-07-15 22:26:57.240087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.059 [2024-07-15 22:26:57.240093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.059 qpair failed and we were unable to recover it. 00:29:32.059 [2024-07-15 22:26:57.240491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.059 [2024-07-15 22:26:57.240497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.059 qpair failed and we were unable to recover it. 00:29:32.059 [2024-07-15 22:26:57.240885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.059 [2024-07-15 22:26:57.240892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.059 qpair failed and we were unable to recover it. 00:29:32.059 [2024-07-15 22:26:57.241302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.059 [2024-07-15 22:26:57.241312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.059 qpair failed and we were unable to recover it. 00:29:32.059 [2024-07-15 22:26:57.241629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.059 [2024-07-15 22:26:57.241636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.059 qpair failed and we were unable to recover it. 
00:29:32.059 [2024-07-15 22:26:57.242039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.059 [2024-07-15 22:26:57.242046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.059 qpair failed and we were unable to recover it. 00:29:32.059 [2024-07-15 22:26:57.242500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.059 [2024-07-15 22:26:57.242507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.059 qpair failed and we were unable to recover it. 00:29:32.059 [2024-07-15 22:26:57.242899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.059 [2024-07-15 22:26:57.242905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.059 qpair failed and we were unable to recover it. 00:29:32.059 [2024-07-15 22:26:57.243221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.059 [2024-07-15 22:26:57.243229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.059 qpair failed and we were unable to recover it. 00:29:32.059 [2024-07-15 22:26:57.243557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.059 [2024-07-15 22:26:57.243564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.059 qpair failed and we were unable to recover it. 00:29:32.059 [2024-07-15 22:26:57.243992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.059 [2024-07-15 22:26:57.243998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.059 qpair failed and we were unable to recover it. 00:29:32.059 [2024-07-15 22:26:57.244300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.059 [2024-07-15 22:26:57.244307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.059 qpair failed and we were unable to recover it. 00:29:32.059 [2024-07-15 22:26:57.244726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.059 [2024-07-15 22:26:57.244732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.059 qpair failed and we were unable to recover it. 00:29:32.059 [2024-07-15 22:26:57.245071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.059 [2024-07-15 22:26:57.245077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.059 qpair failed and we were unable to recover it. 00:29:32.059 [2024-07-15 22:26:57.245487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.059 [2024-07-15 22:26:57.245494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.059 qpair failed and we were unable to recover it. 
00:29:32.059 [2024-07-15 22:26:57.245927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.059 [2024-07-15 22:26:57.245933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.059 qpair failed and we were unable to recover it. 00:29:32.059 [2024-07-15 22:26:57.246429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.059 [2024-07-15 22:26:57.246456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.059 qpair failed and we were unable to recover it. 00:29:32.059 [2024-07-15 22:26:57.246859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.059 [2024-07-15 22:26:57.246867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.059 qpair failed and we were unable to recover it. 00:29:32.059 [2024-07-15 22:26:57.247394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.059 [2024-07-15 22:26:57.247421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.059 qpair failed and we were unable to recover it. 00:29:32.059 [2024-07-15 22:26:57.247822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.059 [2024-07-15 22:26:57.247830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.059 qpair failed and we were unable to recover it. 00:29:32.059 [2024-07-15 22:26:57.248343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.059 [2024-07-15 22:26:57.248370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.059 qpair failed and we were unable to recover it. 00:29:32.059 [2024-07-15 22:26:57.248775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.059 [2024-07-15 22:26:57.248783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.059 qpair failed and we were unable to recover it. 00:29:32.059 [2024-07-15 22:26:57.249091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.059 [2024-07-15 22:26:57.249098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.059 qpair failed and we were unable to recover it. 00:29:32.059 [2024-07-15 22:26:57.249316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.059 [2024-07-15 22:26:57.249326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.059 qpair failed and we were unable to recover it. 00:29:32.059 [2024-07-15 22:26:57.249750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.059 [2024-07-15 22:26:57.249756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.059 qpair failed and we were unable to recover it. 
00:29:32.059 [2024-07-15 22:26:57.250168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.059 [2024-07-15 22:26:57.250174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.059 qpair failed and we were unable to recover it. 00:29:32.059 [2024-07-15 22:26:57.250565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.059 [2024-07-15 22:26:57.250572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.059 qpair failed and we were unable to recover it. 00:29:32.059 [2024-07-15 22:26:57.250983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.059 [2024-07-15 22:26:57.250990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.059 qpair failed and we were unable to recover it. 00:29:32.059 [2024-07-15 22:26:57.251190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.059 [2024-07-15 22:26:57.251198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.059 qpair failed and we were unable to recover it. 00:29:32.059 [2024-07-15 22:26:57.251688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.059 [2024-07-15 22:26:57.251695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.059 qpair failed and we were unable to recover it. 00:29:32.059 [2024-07-15 22:26:57.252146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.059 [2024-07-15 22:26:57.252153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.059 qpair failed and we were unable to recover it. 00:29:32.059 [2024-07-15 22:26:57.252561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.059 [2024-07-15 22:26:57.252567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.060 qpair failed and we were unable to recover it. 00:29:32.060 [2024-07-15 22:26:57.252951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.060 [2024-07-15 22:26:57.252958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.060 qpair failed and we were unable to recover it. 00:29:32.060 [2024-07-15 22:26:57.253358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.060 [2024-07-15 22:26:57.253364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.060 qpair failed and we were unable to recover it. 00:29:32.060 [2024-07-15 22:26:57.253762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.060 [2024-07-15 22:26:57.253768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.060 qpair failed and we were unable to recover it. 
00:29:32.060 [2024-07-15 22:26:57.253966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.060 [2024-07-15 22:26:57.253974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.060 qpair failed and we were unable to recover it. 00:29:32.060 [2024-07-15 22:26:57.254236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.060 [2024-07-15 22:26:57.254244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.060 qpair failed and we were unable to recover it. 00:29:32.060 [2024-07-15 22:26:57.254633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.060 [2024-07-15 22:26:57.254640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.060 qpair failed and we were unable to recover it. 00:29:32.060 [2024-07-15 22:26:57.255065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.060 [2024-07-15 22:26:57.255072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.060 qpair failed and we were unable to recover it. 00:29:32.060 [2024-07-15 22:26:57.255300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.060 [2024-07-15 22:26:57.255307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.060 qpair failed and we were unable to recover it. 00:29:32.060 [2024-07-15 22:26:57.255719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.060 [2024-07-15 22:26:57.255725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.060 qpair failed and we were unable to recover it. 00:29:32.060 [2024-07-15 22:26:57.256117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.060 [2024-07-15 22:26:57.256128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.060 qpair failed and we were unable to recover it. 00:29:32.060 [2024-07-15 22:26:57.256552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.060 [2024-07-15 22:26:57.256558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.060 qpair failed and we were unable to recover it. 00:29:32.060 [2024-07-15 22:26:57.256967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.060 [2024-07-15 22:26:57.256976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.060 qpair failed and we were unable to recover it. 00:29:32.060 [2024-07-15 22:26:57.257473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.060 [2024-07-15 22:26:57.257500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.060 qpair failed and we were unable to recover it. 
00:29:32.060 [2024-07-15 22:26:57.257903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.060 [2024-07-15 22:26:57.257911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.060 qpair failed and we were unable to recover it. 00:29:32.060 [2024-07-15 22:26:57.258428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.060 [2024-07-15 22:26:57.258455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.060 qpair failed and we were unable to recover it. 00:29:32.060 [2024-07-15 22:26:57.258860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.060 [2024-07-15 22:26:57.258869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.060 qpair failed and we were unable to recover it. 00:29:32.060 [2024-07-15 22:26:57.259421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.060 [2024-07-15 22:26:57.259449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.060 qpair failed and we were unable to recover it. 00:29:32.060 [2024-07-15 22:26:57.259894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.060 [2024-07-15 22:26:57.259903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.060 qpair failed and we were unable to recover it. 00:29:32.060 [2024-07-15 22:26:57.260340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.060 [2024-07-15 22:26:57.260367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.060 qpair failed and we were unable to recover it. 00:29:32.060 [2024-07-15 22:26:57.260770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.060 [2024-07-15 22:26:57.260778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.060 qpair failed and we were unable to recover it. 00:29:32.060 [2024-07-15 22:26:57.261209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.060 [2024-07-15 22:26:57.261216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.060 qpair failed and we were unable to recover it. 00:29:32.060 [2024-07-15 22:26:57.261484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.060 [2024-07-15 22:26:57.261492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.060 qpair failed and we were unable to recover it. 00:29:32.060 [2024-07-15 22:26:57.261909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.060 [2024-07-15 22:26:57.261915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.060 qpair failed and we were unable to recover it. 
00:29:32.060 [2024-07-15 22:26:57.262312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.060 [2024-07-15 22:26:57.262319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.060 qpair failed and we were unable to recover it. 00:29:32.060 [2024-07-15 22:26:57.262750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.060 [2024-07-15 22:26:57.262756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.060 qpair failed and we were unable to recover it. 00:29:32.060 [2024-07-15 22:26:57.263142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.060 [2024-07-15 22:26:57.263149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.060 qpair failed and we were unable to recover it. 00:29:32.060 [2024-07-15 22:26:57.263543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.060 [2024-07-15 22:26:57.263549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.060 qpair failed and we were unable to recover it. 00:29:32.060 [2024-07-15 22:26:57.263881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.060 [2024-07-15 22:26:57.263888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.060 qpair failed and we were unable to recover it. 00:29:32.060 [2024-07-15 22:26:57.264198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.060 [2024-07-15 22:26:57.264206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.060 qpair failed and we were unable to recover it. 00:29:32.060 [2024-07-15 22:26:57.264592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.060 [2024-07-15 22:26:57.264599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.060 qpair failed and we were unable to recover it. 00:29:32.060 [2024-07-15 22:26:57.265063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.060 [2024-07-15 22:26:57.265070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.060 qpair failed and we were unable to recover it. 00:29:32.060 [2024-07-15 22:26:57.265457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.060 [2024-07-15 22:26:57.265464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.060 qpair failed and we were unable to recover it. 00:29:32.060 [2024-07-15 22:26:57.265848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.060 [2024-07-15 22:26:57.265854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.060 qpair failed and we were unable to recover it. 
00:29:32.060 [2024-07-15 22:26:57.266267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.060 [2024-07-15 22:26:57.266274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.060 qpair failed and we were unable to recover it. 00:29:32.060 [2024-07-15 22:26:57.266658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.060 [2024-07-15 22:26:57.266665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.060 qpair failed and we were unable to recover it. 00:29:32.060 [2024-07-15 22:26:57.267096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.060 [2024-07-15 22:26:57.267102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.060 qpair failed and we were unable to recover it. 00:29:32.060 [2024-07-15 22:26:57.267563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.060 [2024-07-15 22:26:57.267570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.060 qpair failed and we were unable to recover it. 00:29:32.060 [2024-07-15 22:26:57.267959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.060 [2024-07-15 22:26:57.267965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.060 qpair failed and we were unable to recover it. 00:29:32.060 [2024-07-15 22:26:57.268449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.060 [2024-07-15 22:26:57.268476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.060 qpair failed and we were unable to recover it. 00:29:32.060 [2024-07-15 22:26:57.268970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.060 [2024-07-15 22:26:57.268978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.061 qpair failed and we were unable to recover it. 00:29:32.061 [2024-07-15 22:26:57.269467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.061 [2024-07-15 22:26:57.269495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.061 qpair failed and we were unable to recover it. 00:29:32.061 [2024-07-15 22:26:57.269894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.061 [2024-07-15 22:26:57.269902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.061 qpair failed and we were unable to recover it. 00:29:32.061 [2024-07-15 22:26:57.270324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.061 [2024-07-15 22:26:57.270358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.061 qpair failed and we were unable to recover it. 
00:29:32.061 [2024-07-15 22:26:57.270774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.061 [2024-07-15 22:26:57.270783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.061 qpair failed and we were unable to recover it. 00:29:32.061 [2024-07-15 22:26:57.271332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.061 [2024-07-15 22:26:57.271360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.061 qpair failed and we were unable to recover it. 00:29:32.061 [2024-07-15 22:26:57.271754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.061 [2024-07-15 22:26:57.271762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.061 qpair failed and we were unable to recover it. 00:29:32.061 [2024-07-15 22:26:57.272157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.061 [2024-07-15 22:26:57.272164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.061 qpair failed and we were unable to recover it. 00:29:32.061 [2024-07-15 22:26:57.272577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.061 [2024-07-15 22:26:57.272583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.061 qpair failed and we were unable to recover it. 00:29:32.061 [2024-07-15 22:26:57.272972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.061 [2024-07-15 22:26:57.272978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.061 qpair failed and we were unable to recover it. 00:29:32.061 [2024-07-15 22:26:57.273275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.061 [2024-07-15 22:26:57.273282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.061 qpair failed and we were unable to recover it. 00:29:32.061 [2024-07-15 22:26:57.273685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.061 [2024-07-15 22:26:57.273691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.061 qpair failed and we were unable to recover it. 00:29:32.061 [2024-07-15 22:26:57.274075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.061 [2024-07-15 22:26:57.274085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.061 qpair failed and we were unable to recover it. 00:29:32.061 [2024-07-15 22:26:57.274479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.061 [2024-07-15 22:26:57.274487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.061 qpair failed and we were unable to recover it. 
00:29:32.061 [2024-07-15 22:26:57.274895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.061 [2024-07-15 22:26:57.274902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.061 qpair failed and we were unable to recover it. 00:29:32.061 [2024-07-15 22:26:57.275427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.061 [2024-07-15 22:26:57.275455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.061 qpair failed and we were unable to recover it. 00:29:32.061 [2024-07-15 22:26:57.275858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.061 [2024-07-15 22:26:57.275866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.061 qpair failed and we were unable to recover it. 00:29:32.061 [2024-07-15 22:26:57.276275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.061 [2024-07-15 22:26:57.276283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.061 qpair failed and we were unable to recover it. 00:29:32.061 [2024-07-15 22:26:57.276603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.061 [2024-07-15 22:26:57.276610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.061 qpair failed and we were unable to recover it. 00:29:32.061 [2024-07-15 22:26:57.277042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.061 [2024-07-15 22:26:57.277048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.061 qpair failed and we were unable to recover it. 00:29:32.061 [2024-07-15 22:26:57.277353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.061 [2024-07-15 22:26:57.277367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.061 qpair failed and we were unable to recover it. 00:29:32.061 [2024-07-15 22:26:57.277776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.061 [2024-07-15 22:26:57.277783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.061 qpair failed and we were unable to recover it. 00:29:32.061 [2024-07-15 22:26:57.278169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.061 [2024-07-15 22:26:57.278176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.061 qpair failed and we were unable to recover it. 00:29:32.061 [2024-07-15 22:26:57.278586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.061 [2024-07-15 22:26:57.278592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.061 qpair failed and we were unable to recover it. 
00:29:32.061 [2024-07-15 22:26:57.278978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.061 [2024-07-15 22:26:57.278984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.061 qpair failed and we were unable to recover it. 00:29:32.061 [2024-07-15 22:26:57.279300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.061 [2024-07-15 22:26:57.279307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.061 qpair failed and we were unable to recover it. 00:29:32.061 [2024-07-15 22:26:57.279691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.061 [2024-07-15 22:26:57.279697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.061 qpair failed and we were unable to recover it. 00:29:32.061 [2024-07-15 22:26:57.280083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.061 [2024-07-15 22:26:57.280089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.061 qpair failed and we were unable to recover it. 00:29:32.061 [2024-07-15 22:26:57.280520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.061 [2024-07-15 22:26:57.280527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.061 qpair failed and we were unable to recover it. 00:29:32.061 [2024-07-15 22:26:57.280911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.061 [2024-07-15 22:26:57.280917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.061 qpair failed and we were unable to recover it. 00:29:32.061 [2024-07-15 22:26:57.281403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.061 [2024-07-15 22:26:57.281430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.061 qpair failed and we were unable to recover it. 00:29:32.061 [2024-07-15 22:26:57.281862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.061 [2024-07-15 22:26:57.281870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.061 qpair failed and we were unable to recover it. 00:29:32.061 [2024-07-15 22:26:57.282356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.061 [2024-07-15 22:26:57.282383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.061 qpair failed and we were unable to recover it. 00:29:32.061 [2024-07-15 22:26:57.282784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.061 [2024-07-15 22:26:57.282792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.061 qpair failed and we were unable to recover it. 
00:29:32.061 [2024-07-15 22:26:57.283182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.061 [2024-07-15 22:26:57.283190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.061 qpair failed and we were unable to recover it. 00:29:32.061 [2024-07-15 22:26:57.283629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.061 [2024-07-15 22:26:57.283636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.061 qpair failed and we were unable to recover it. 00:29:32.061 [2024-07-15 22:26:57.284117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.061 [2024-07-15 22:26:57.284136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.061 qpair failed and we were unable to recover it. 00:29:32.061 [2024-07-15 22:26:57.284521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.061 [2024-07-15 22:26:57.284528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.061 qpair failed and we were unable to recover it. 00:29:32.061 [2024-07-15 22:26:57.284705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.061 [2024-07-15 22:26:57.284716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.061 qpair failed and we were unable to recover it. 00:29:32.061 [2024-07-15 22:26:57.285154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.061 [2024-07-15 22:26:57.285161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.061 qpair failed and we were unable to recover it. 00:29:32.061 [2024-07-15 22:26:57.285623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.061 [2024-07-15 22:26:57.285630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.061 qpair failed and we were unable to recover it. 00:29:32.062 [2024-07-15 22:26:57.286058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.062 [2024-07-15 22:26:57.286064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.062 qpair failed and we were unable to recover it. 00:29:32.062 [2024-07-15 22:26:57.286461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.062 [2024-07-15 22:26:57.286467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.062 qpair failed and we were unable to recover it. 00:29:32.062 [2024-07-15 22:26:57.286801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.062 [2024-07-15 22:26:57.286808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.062 qpair failed and we were unable to recover it. 
00:29:32.062 [2024-07-15 22:26:57.287214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.062 [2024-07-15 22:26:57.287220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.062 qpair failed and we were unable to recover it. 00:29:32.062 [2024-07-15 22:26:57.287419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.062 [2024-07-15 22:26:57.287427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.062 qpair failed and we were unable to recover it. 00:29:32.062 [2024-07-15 22:26:57.287748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.062 [2024-07-15 22:26:57.287755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.062 qpair failed and we were unable to recover it. 00:29:32.062 [2024-07-15 22:26:57.288179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.062 [2024-07-15 22:26:57.288186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.062 qpair failed and we were unable to recover it. 00:29:32.062 [2024-07-15 22:26:57.288587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.062 [2024-07-15 22:26:57.288593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.062 qpair failed and we were unable to recover it. 00:29:32.062 [2024-07-15 22:26:57.288993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.062 [2024-07-15 22:26:57.289000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.062 qpair failed and we were unable to recover it. 00:29:32.062 [2024-07-15 22:26:57.289388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.062 [2024-07-15 22:26:57.289394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.062 qpair failed and we were unable to recover it. 00:29:32.062 [2024-07-15 22:26:57.289828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.062 [2024-07-15 22:26:57.289835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.062 qpair failed and we were unable to recover it. 00:29:32.062 [2024-07-15 22:26:57.290239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.062 [2024-07-15 22:26:57.290248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.062 qpair failed and we were unable to recover it. 00:29:32.062 [2024-07-15 22:26:57.290654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.062 [2024-07-15 22:26:57.290661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.062 qpair failed and we were unable to recover it. 
00:29:32.062 [2024-07-15 22:26:57.291045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.062 [2024-07-15 22:26:57.291051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.062 qpair failed and we were unable to recover it. 00:29:32.062 [2024-07-15 22:26:57.291463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.062 [2024-07-15 22:26:57.291470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.062 qpair failed and we were unable to recover it. 00:29:32.062 [2024-07-15 22:26:57.291881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.062 [2024-07-15 22:26:57.291889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.062 qpair failed and we were unable to recover it. 00:29:32.062 [2024-07-15 22:26:57.292291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.062 [2024-07-15 22:26:57.292297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.062 qpair failed and we were unable to recover it. 00:29:32.062 [2024-07-15 22:26:57.292772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.062 [2024-07-15 22:26:57.292778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.062 qpair failed and we were unable to recover it. 00:29:32.062 [2024-07-15 22:26:57.292963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.062 [2024-07-15 22:26:57.292970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.062 qpair failed and we were unable to recover it. 00:29:32.062 [2024-07-15 22:26:57.293391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.062 [2024-07-15 22:26:57.293398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.062 qpair failed and we were unable to recover it. 00:29:32.062 [2024-07-15 22:26:57.293787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.062 [2024-07-15 22:26:57.293793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.062 qpair failed and we were unable to recover it. 00:29:32.062 [2024-07-15 22:26:57.294178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.062 [2024-07-15 22:26:57.294184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.062 qpair failed and we were unable to recover it. 00:29:32.062 [2024-07-15 22:26:57.294613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.062 [2024-07-15 22:26:57.294620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.062 qpair failed and we were unable to recover it. 
00:29:32.062 [2024-07-15 22:26:57.295006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.062 [2024-07-15 22:26:57.295012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.062 qpair failed and we were unable to recover it. 00:29:32.062 [2024-07-15 22:26:57.295478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.062 [2024-07-15 22:26:57.295484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.062 qpair failed and we were unable to recover it. 00:29:32.062 [2024-07-15 22:26:57.295912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.062 [2024-07-15 22:26:57.295919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.062 qpair failed and we were unable to recover it. 00:29:32.062 [2024-07-15 22:26:57.296351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.062 [2024-07-15 22:26:57.296357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.062 qpair failed and we were unable to recover it. 00:29:32.062 [2024-07-15 22:26:57.296743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.062 [2024-07-15 22:26:57.296749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.062 qpair failed and we were unable to recover it. 00:29:32.062 [2024-07-15 22:26:57.297136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.062 [2024-07-15 22:26:57.297143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.062 qpair failed and we were unable to recover it. 00:29:32.062 [2024-07-15 22:26:57.297512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.062 [2024-07-15 22:26:57.297518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.062 qpair failed and we were unable to recover it. 00:29:32.062 [2024-07-15 22:26:57.297909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.062 [2024-07-15 22:26:57.297915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.062 qpair failed and we were unable to recover it. 00:29:32.062 [2024-07-15 22:26:57.298345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.062 [2024-07-15 22:26:57.298352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.062 qpair failed and we were unable to recover it. 00:29:32.062 [2024-07-15 22:26:57.298753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.062 [2024-07-15 22:26:57.298760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.062 qpair failed and we were unable to recover it. 
00:29:32.062 [2024-07-15 22:26:57.299150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.062 [2024-07-15 22:26:57.299157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.063 qpair failed and we were unable to recover it. 00:29:32.063 [2024-07-15 22:26:57.299550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.063 [2024-07-15 22:26:57.299556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.063 qpair failed and we were unable to recover it. 00:29:32.063 [2024-07-15 22:26:57.299953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.063 [2024-07-15 22:26:57.299959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.063 qpair failed and we were unable to recover it. 00:29:32.063 [2024-07-15 22:26:57.300376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.063 [2024-07-15 22:26:57.300383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.063 qpair failed and we were unable to recover it. 00:29:32.063 [2024-07-15 22:26:57.300868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.063 [2024-07-15 22:26:57.300875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.063 qpair failed and we were unable to recover it. 00:29:32.063 [2024-07-15 22:26:57.301367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.063 [2024-07-15 22:26:57.301395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.063 qpair failed and we were unable to recover it. 00:29:32.063 [2024-07-15 22:26:57.301800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.063 [2024-07-15 22:26:57.301808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.063 qpair failed and we were unable to recover it. 00:29:32.063 [2024-07-15 22:26:57.302237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.063 [2024-07-15 22:26:57.302245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.063 qpair failed and we were unable to recover it. 00:29:32.063 [2024-07-15 22:26:57.302650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.063 [2024-07-15 22:26:57.302658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.063 qpair failed and we were unable to recover it. 00:29:32.063 [2024-07-15 22:26:57.303105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.063 [2024-07-15 22:26:57.303112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.063 qpair failed and we were unable to recover it. 
00:29:32.063 [2024-07-15 22:26:57.303523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.063 [2024-07-15 22:26:57.303530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.063 qpair failed and we were unable to recover it. 00:29:32.063 [2024-07-15 22:26:57.303953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.063 [2024-07-15 22:26:57.303960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.063 qpair failed and we were unable to recover it. 00:29:32.063 [2024-07-15 22:26:57.304458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.063 [2024-07-15 22:26:57.304486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.063 qpair failed and we were unable to recover it. 00:29:32.063 [2024-07-15 22:26:57.304821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.063 [2024-07-15 22:26:57.304829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.063 qpair failed and we were unable to recover it. 00:29:32.063 [2024-07-15 22:26:57.305339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.063 [2024-07-15 22:26:57.305366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.063 qpair failed and we were unable to recover it. 00:29:32.063 [2024-07-15 22:26:57.305769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.063 [2024-07-15 22:26:57.305777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.063 qpair failed and we were unable to recover it. 00:29:32.063 [2024-07-15 22:26:57.306173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.063 [2024-07-15 22:26:57.306180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.063 qpair failed and we were unable to recover it. 00:29:32.063 [2024-07-15 22:26:57.306573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.063 [2024-07-15 22:26:57.306580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.063 qpair failed and we were unable to recover it. 00:29:32.063 [2024-07-15 22:26:57.306987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.063 [2024-07-15 22:26:57.306997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.063 qpair failed and we were unable to recover it. 00:29:32.063 [2024-07-15 22:26:57.307307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.063 [2024-07-15 22:26:57.307314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.063 qpair failed and we were unable to recover it. 
00:29:32.063 [2024-07-15 22:26:57.307682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.063 [2024-07-15 22:26:57.307689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.063 qpair failed and we were unable to recover it. 00:29:32.063 [2024-07-15 22:26:57.308115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.063 [2024-07-15 22:26:57.308124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.063 qpair failed and we were unable to recover it. 00:29:32.063 [2024-07-15 22:26:57.308513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.063 [2024-07-15 22:26:57.308520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.063 qpair failed and we were unable to recover it. 00:29:32.063 [2024-07-15 22:26:57.308901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.063 [2024-07-15 22:26:57.308908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.063 qpair failed and we were unable to recover it. 00:29:32.063 [2024-07-15 22:26:57.309393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.063 [2024-07-15 22:26:57.309420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.063 qpair failed and we were unable to recover it. 00:29:32.063 [2024-07-15 22:26:57.309861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.063 [2024-07-15 22:26:57.309869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.063 qpair failed and we were unable to recover it. 00:29:32.063 [2024-07-15 22:26:57.310391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.063 [2024-07-15 22:26:57.310418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.063 qpair failed and we were unable to recover it. 00:29:32.063 [2024-07-15 22:26:57.310826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.063 [2024-07-15 22:26:57.310834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.063 qpair failed and we were unable to recover it. 00:29:32.063 [2024-07-15 22:26:57.311339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.063 [2024-07-15 22:26:57.311366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.063 qpair failed and we were unable to recover it. 00:29:32.063 [2024-07-15 22:26:57.311788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.063 [2024-07-15 22:26:57.311796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.063 qpair failed and we were unable to recover it. 
00:29:32.063 [2024-07-15 22:26:57.312137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.063 [2024-07-15 22:26:57.312144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.063 qpair failed and we were unable to recover it. 00:29:32.063 [2024-07-15 22:26:57.312623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.063 [2024-07-15 22:26:57.312630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.063 qpair failed and we were unable to recover it. 00:29:32.063 [2024-07-15 22:26:57.313021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.063 [2024-07-15 22:26:57.313028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.063 qpair failed and we were unable to recover it. 00:29:32.063 [2024-07-15 22:26:57.313577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.063 [2024-07-15 22:26:57.313604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.063 qpair failed and we were unable to recover it. 00:29:32.063 [2024-07-15 22:26:57.314004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.063 [2024-07-15 22:26:57.314012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.063 qpair failed and we were unable to recover it. 00:29:32.063 [2024-07-15 22:26:57.314424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.063 [2024-07-15 22:26:57.314431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.063 qpair failed and we were unable to recover it. 00:29:32.063 [2024-07-15 22:26:57.314836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.063 [2024-07-15 22:26:57.314844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.063 qpair failed and we were unable to recover it. 00:29:32.063 [2024-07-15 22:26:57.315371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.063 [2024-07-15 22:26:57.315398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.063 qpair failed and we were unable to recover it. 00:29:32.063 [2024-07-15 22:26:57.315800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.063 [2024-07-15 22:26:57.315808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.063 qpair failed and we were unable to recover it. 00:29:32.063 [2024-07-15 22:26:57.316202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.063 [2024-07-15 22:26:57.316209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.063 qpair failed and we were unable to recover it. 
00:29:32.063 [2024-07-15 22:26:57.316637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.063 [2024-07-15 22:26:57.316644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.063 qpair failed and we were unable to recover it. 00:29:32.064 [2024-07-15 22:26:57.317039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.064 [2024-07-15 22:26:57.317045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.064 qpair failed and we were unable to recover it. 00:29:32.064 [2024-07-15 22:26:57.317358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.064 [2024-07-15 22:26:57.317366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.064 qpair failed and we were unable to recover it. 00:29:32.064 [2024-07-15 22:26:57.317774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.064 [2024-07-15 22:26:57.317780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.064 qpair failed and we were unable to recover it. 00:29:32.064 [2024-07-15 22:26:57.318164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.064 [2024-07-15 22:26:57.318171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.064 qpair failed and we were unable to recover it. 00:29:32.064 [2024-07-15 22:26:57.318578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.064 [2024-07-15 22:26:57.318585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.064 qpair failed and we were unable to recover it. 00:29:32.064 [2024-07-15 22:26:57.318966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.064 [2024-07-15 22:26:57.318972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.064 qpair failed and we were unable to recover it. 00:29:32.064 [2024-07-15 22:26:57.319369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.064 [2024-07-15 22:26:57.319376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.064 qpair failed and we were unable to recover it. 00:29:32.064 [2024-07-15 22:26:57.319809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.064 [2024-07-15 22:26:57.319815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.064 qpair failed and we were unable to recover it. 00:29:32.064 [2024-07-15 22:26:57.320243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.064 [2024-07-15 22:26:57.320250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.064 qpair failed and we were unable to recover it. 
00:29:32.064 [2024-07-15 22:26:57.320537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.064 [2024-07-15 22:26:57.320544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.064 qpair failed and we were unable to recover it. 00:29:32.064 [2024-07-15 22:26:57.320954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.064 [2024-07-15 22:26:57.320961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.064 qpair failed and we were unable to recover it. 00:29:32.064 [2024-07-15 22:26:57.321355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.064 [2024-07-15 22:26:57.321362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.064 qpair failed and we were unable to recover it. 00:29:32.064 [2024-07-15 22:26:57.321456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.064 [2024-07-15 22:26:57.321465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.064 qpair failed and we were unable to recover it. 00:29:32.064 [2024-07-15 22:26:57.321849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.064 [2024-07-15 22:26:57.321856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.064 qpair failed and we were unable to recover it. 00:29:32.064 [2024-07-15 22:26:57.322257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.064 [2024-07-15 22:26:57.322264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.064 qpair failed and we were unable to recover it. 00:29:32.064 [2024-07-15 22:26:57.322673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.064 [2024-07-15 22:26:57.322679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.064 qpair failed and we were unable to recover it. 00:29:32.064 [2024-07-15 22:26:57.323072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.064 [2024-07-15 22:26:57.323078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.064 qpair failed and we were unable to recover it. 00:29:32.064 [2024-07-15 22:26:57.323433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.064 [2024-07-15 22:26:57.323442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.064 qpair failed and we were unable to recover it. 00:29:32.064 [2024-07-15 22:26:57.323849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.064 [2024-07-15 22:26:57.323856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.064 qpair failed and we were unable to recover it. 
00:29:32.064 [2024-07-15 22:26:57.324226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.064 [2024-07-15 22:26:57.324238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.064 qpair failed and we were unable to recover it. 00:29:32.064 [2024-07-15 22:26:57.324523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.064 [2024-07-15 22:26:57.324530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.064 qpair failed and we were unable to recover it. 00:29:32.064 [2024-07-15 22:26:57.324936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.064 [2024-07-15 22:26:57.324942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.064 qpair failed and we were unable to recover it. 00:29:32.064 [2024-07-15 22:26:57.325333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.064 [2024-07-15 22:26:57.325339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.064 qpair failed and we were unable to recover it. 00:29:32.064 [2024-07-15 22:26:57.325732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.064 [2024-07-15 22:26:57.325738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.064 qpair failed and we were unable to recover it. 00:29:32.064 [2024-07-15 22:26:57.326165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.064 [2024-07-15 22:26:57.326172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.064 qpair failed and we were unable to recover it. 00:29:32.064 [2024-07-15 22:26:57.326584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.064 [2024-07-15 22:26:57.326590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.064 qpair failed and we were unable to recover it. 00:29:32.064 [2024-07-15 22:26:57.326878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.064 [2024-07-15 22:26:57.326885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.064 qpair failed and we were unable to recover it. 00:29:32.064 [2024-07-15 22:26:57.327291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.064 [2024-07-15 22:26:57.327298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.064 qpair failed and we were unable to recover it. 00:29:32.064 [2024-07-15 22:26:57.327679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.064 [2024-07-15 22:26:57.327685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.064 qpair failed and we were unable to recover it. 
00:29:32.064 [2024-07-15 22:26:57.328072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.064 [2024-07-15 22:26:57.328078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.064 qpair failed and we were unable to recover it. 00:29:32.064 [2024-07-15 22:26:57.328528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.064 [2024-07-15 22:26:57.328535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.064 qpair failed and we were unable to recover it. 00:29:32.064 [2024-07-15 22:26:57.328797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.064 [2024-07-15 22:26:57.328804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.064 qpair failed and we were unable to recover it. 00:29:32.064 [2024-07-15 22:26:57.329201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.064 [2024-07-15 22:26:57.329208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.064 qpair failed and we were unable to recover it. 00:29:32.064 [2024-07-15 22:26:57.329614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.064 [2024-07-15 22:26:57.329621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.064 qpair failed and we were unable to recover it. 00:29:32.064 [2024-07-15 22:26:57.330032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.064 [2024-07-15 22:26:57.330039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.064 qpair failed and we were unable to recover it. 00:29:32.064 [2024-07-15 22:26:57.330349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.064 [2024-07-15 22:26:57.330355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.064 qpair failed and we were unable to recover it. 00:29:32.064 [2024-07-15 22:26:57.330752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.064 [2024-07-15 22:26:57.330758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.064 qpair failed and we were unable to recover it. 00:29:32.064 [2024-07-15 22:26:57.331150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.064 [2024-07-15 22:26:57.331157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.064 qpair failed and we were unable to recover it. 00:29:32.064 [2024-07-15 22:26:57.331440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.064 [2024-07-15 22:26:57.331447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.064 qpair failed and we were unable to recover it. 
00:29:32.064 [2024-07-15 22:26:57.331832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.064 [2024-07-15 22:26:57.331838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.064 qpair failed and we were unable to recover it. 00:29:32.064 [2024-07-15 22:26:57.332244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.065 [2024-07-15 22:26:57.332251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.065 qpair failed and we were unable to recover it. 00:29:32.065 [2024-07-15 22:26:57.332619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.065 [2024-07-15 22:26:57.332625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.065 qpair failed and we were unable to recover it. 00:29:32.065 [2024-07-15 22:26:57.333010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.065 [2024-07-15 22:26:57.333016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.065 qpair failed and we were unable to recover it. 00:29:32.065 [2024-07-15 22:26:57.333434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.065 [2024-07-15 22:26:57.333441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.065 qpair failed and we were unable to recover it. 00:29:32.065 [2024-07-15 22:26:57.333827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.065 [2024-07-15 22:26:57.333833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.065 qpair failed and we were unable to recover it. 00:29:32.065 [2024-07-15 22:26:57.334218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.065 [2024-07-15 22:26:57.334225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.065 qpair failed and we were unable to recover it. 00:29:32.065 [2024-07-15 22:26:57.334617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.065 [2024-07-15 22:26:57.334624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.065 qpair failed and we were unable to recover it. 00:29:32.065 [2024-07-15 22:26:57.334941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.065 [2024-07-15 22:26:57.334948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.065 qpair failed and we were unable to recover it. 00:29:32.065 [2024-07-15 22:26:57.335350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.065 [2024-07-15 22:26:57.335357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.065 qpair failed and we were unable to recover it. 
00:29:32.065 [2024-07-15 22:26:57.335747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.065 [2024-07-15 22:26:57.335753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.065 qpair failed and we were unable to recover it. 00:29:32.065 [2024-07-15 22:26:57.336145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.065 [2024-07-15 22:26:57.336151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.065 qpair failed and we were unable to recover it. 00:29:32.065 [2024-07-15 22:26:57.336559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.065 [2024-07-15 22:26:57.336565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.065 qpair failed and we were unable to recover it. 00:29:32.065 [2024-07-15 22:26:57.336991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.065 [2024-07-15 22:26:57.336997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.065 qpair failed and we were unable to recover it. 00:29:32.065 [2024-07-15 22:26:57.337383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.065 [2024-07-15 22:26:57.337389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.065 qpair failed and we were unable to recover it. 00:29:32.065 [2024-07-15 22:26:57.337795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.065 [2024-07-15 22:26:57.337801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.065 qpair failed and we were unable to recover it. 00:29:32.065 [2024-07-15 22:26:57.338216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.065 [2024-07-15 22:26:57.338224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.065 qpair failed and we were unable to recover it. 00:29:32.065 [2024-07-15 22:26:57.338626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.065 [2024-07-15 22:26:57.338633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.065 qpair failed and we were unable to recover it. 00:29:32.065 [2024-07-15 22:26:57.339100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.065 [2024-07-15 22:26:57.339110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.065 qpair failed and we were unable to recover it. 00:29:32.065 [2024-07-15 22:26:57.339523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.065 [2024-07-15 22:26:57.339530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.065 qpair failed and we were unable to recover it. 
00:29:32.065 [2024-07-15 22:26:57.339956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.065 [2024-07-15 22:26:57.339963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:32.065 qpair failed and we were unable to recover it.
00:29:32.065 (the same three-line sequence - posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. - repeats for every reconnect attempt from 22:26:57.339956 through 22:26:57.426409)
00:29:32.385 [2024-07-15 22:26:57.426381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.385 [2024-07-15 22:26:57.426409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:32.385 qpair failed and we were unable to recover it.
00:29:32.385 [2024-07-15 22:26:57.426845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.385 [2024-07-15 22:26:57.426853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.385 qpair failed and we were unable to recover it. 00:29:32.385 [2024-07-15 22:26:57.427357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.385 [2024-07-15 22:26:57.427384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.385 qpair failed and we were unable to recover it. 00:29:32.385 [2024-07-15 22:26:57.427701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.385 [2024-07-15 22:26:57.427710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.385 qpair failed and we were unable to recover it. 00:29:32.385 [2024-07-15 22:26:57.428130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.385 [2024-07-15 22:26:57.428138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.385 qpair failed and we were unable to recover it. 00:29:32.385 [2024-07-15 22:26:57.428523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.385 [2024-07-15 22:26:57.428530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.385 qpair failed and we were unable to recover it. 00:29:32.385 [2024-07-15 22:26:57.428820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.385 [2024-07-15 22:26:57.428826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.385 qpair failed and we were unable to recover it. 00:29:32.385 [2024-07-15 22:26:57.429325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.385 [2024-07-15 22:26:57.429352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.385 qpair failed and we were unable to recover it. 00:29:32.385 [2024-07-15 22:26:57.429765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.385 [2024-07-15 22:26:57.429773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.385 qpair failed and we were unable to recover it. 00:29:32.385 [2024-07-15 22:26:57.430172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.385 [2024-07-15 22:26:57.430179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.385 qpair failed and we were unable to recover it. 00:29:32.385 [2024-07-15 22:26:57.430577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.385 [2024-07-15 22:26:57.430583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.385 qpair failed and we were unable to recover it. 
00:29:32.385 [2024-07-15 22:26:57.430996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.385 [2024-07-15 22:26:57.431003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.385 qpair failed and we were unable to recover it. 00:29:32.410 [2024-07-15 22:26:57.431318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.410 [2024-07-15 22:26:57.431325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.410 qpair failed and we were unable to recover it. 00:29:32.410 [2024-07-15 22:26:57.431734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.410 [2024-07-15 22:26:57.431740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.410 qpair failed and we were unable to recover it. 00:29:32.410 [2024-07-15 22:26:57.432171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.410 [2024-07-15 22:26:57.432178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.410 qpair failed and we were unable to recover it. 00:29:32.410 [2024-07-15 22:26:57.432600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.410 [2024-07-15 22:26:57.432606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.410 qpair failed and we were unable to recover it. 00:29:32.410 [2024-07-15 22:26:57.432910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.410 [2024-07-15 22:26:57.432920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.410 qpair failed and we were unable to recover it. 00:29:32.410 [2024-07-15 22:26:57.433313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.410 [2024-07-15 22:26:57.433320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.410 qpair failed and we were unable to recover it. 00:29:32.410 [2024-07-15 22:26:57.433600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.410 [2024-07-15 22:26:57.433607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.410 qpair failed and we were unable to recover it. 00:29:32.410 [2024-07-15 22:26:57.433800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.410 [2024-07-15 22:26:57.433811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.410 qpair failed and we were unable to recover it. 00:29:32.410 [2024-07-15 22:26:57.434184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.410 [2024-07-15 22:26:57.434191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.410 qpair failed and we were unable to recover it. 
00:29:32.410 [2024-07-15 22:26:57.434511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.410 [2024-07-15 22:26:57.434518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.410 qpair failed and we were unable to recover it. 00:29:32.410 [2024-07-15 22:26:57.434818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.410 [2024-07-15 22:26:57.434824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.410 qpair failed and we were unable to recover it. 00:29:32.410 [2024-07-15 22:26:57.435209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.410 [2024-07-15 22:26:57.435216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.410 qpair failed and we were unable to recover it. 00:29:32.410 [2024-07-15 22:26:57.435667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.410 [2024-07-15 22:26:57.435673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.410 qpair failed and we were unable to recover it. 00:29:32.410 [2024-07-15 22:26:57.435935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.410 [2024-07-15 22:26:57.435942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.410 qpair failed and we were unable to recover it. 00:29:32.410 [2024-07-15 22:26:57.436350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.410 [2024-07-15 22:26:57.436357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.410 qpair failed and we were unable to recover it. 00:29:32.410 [2024-07-15 22:26:57.436657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.410 [2024-07-15 22:26:57.436664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.410 qpair failed and we were unable to recover it. 00:29:32.410 [2024-07-15 22:26:57.436965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.410 [2024-07-15 22:26:57.436971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.410 qpair failed and we were unable to recover it. 00:29:32.410 [2024-07-15 22:26:57.437385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.410 [2024-07-15 22:26:57.437392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.410 qpair failed and we were unable to recover it. 00:29:32.410 [2024-07-15 22:26:57.437808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.410 [2024-07-15 22:26:57.437815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.410 qpair failed and we were unable to recover it. 
00:29:32.410 [2024-07-15 22:26:57.438102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.410 [2024-07-15 22:26:57.438109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.410 qpair failed and we were unable to recover it. 00:29:32.410 [2024-07-15 22:26:57.438534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.410 [2024-07-15 22:26:57.438541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.410 qpair failed and we were unable to recover it. 00:29:32.410 [2024-07-15 22:26:57.439006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.410 [2024-07-15 22:26:57.439013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.410 qpair failed and we were unable to recover it. 00:29:32.410 [2024-07-15 22:26:57.439209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.410 [2024-07-15 22:26:57.439218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.410 qpair failed and we were unable to recover it. 00:29:32.410 [2024-07-15 22:26:57.439510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.410 [2024-07-15 22:26:57.439517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.410 qpair failed and we were unable to recover it. 00:29:32.410 [2024-07-15 22:26:57.439908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.410 [2024-07-15 22:26:57.439916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.410 qpair failed and we were unable to recover it. 00:29:32.410 [2024-07-15 22:26:57.440351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.410 [2024-07-15 22:26:57.440358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.410 qpair failed and we were unable to recover it. 00:29:32.410 [2024-07-15 22:26:57.440678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.410 [2024-07-15 22:26:57.440685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.410 qpair failed and we were unable to recover it. 00:29:32.410 [2024-07-15 22:26:57.441093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.410 [2024-07-15 22:26:57.441099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.410 qpair failed and we were unable to recover it. 00:29:32.410 [2024-07-15 22:26:57.441490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.410 [2024-07-15 22:26:57.441496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.410 qpair failed and we were unable to recover it. 
00:29:32.410 [2024-07-15 22:26:57.441885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.410 [2024-07-15 22:26:57.441891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.410 qpair failed and we were unable to recover it. 00:29:32.410 [2024-07-15 22:26:57.442181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.410 [2024-07-15 22:26:57.442187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.410 qpair failed and we were unable to recover it. 00:29:32.410 [2024-07-15 22:26:57.442599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.410 [2024-07-15 22:26:57.442606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.410 qpair failed and we were unable to recover it. 00:29:32.411 [2024-07-15 22:26:57.443000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.411 [2024-07-15 22:26:57.443007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.411 qpair failed and we were unable to recover it. 00:29:32.411 [2024-07-15 22:26:57.443410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.411 [2024-07-15 22:26:57.443417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.411 qpair failed and we were unable to recover it. 00:29:32.411 [2024-07-15 22:26:57.443735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.411 [2024-07-15 22:26:57.443742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.411 qpair failed and we were unable to recover it. 00:29:32.411 [2024-07-15 22:26:57.444148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.411 [2024-07-15 22:26:57.444155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.411 qpair failed and we were unable to recover it. 00:29:32.411 [2024-07-15 22:26:57.444557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.411 [2024-07-15 22:26:57.444563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.411 qpair failed and we were unable to recover it. 00:29:32.411 [2024-07-15 22:26:57.444959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.411 [2024-07-15 22:26:57.444965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.411 qpair failed and we were unable to recover it. 00:29:32.411 [2024-07-15 22:26:57.445355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.411 [2024-07-15 22:26:57.445362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.411 qpair failed and we were unable to recover it. 
00:29:32.411 [2024-07-15 22:26:57.445755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.411 [2024-07-15 22:26:57.445762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.411 qpair failed and we were unable to recover it. 00:29:32.411 [2024-07-15 22:26:57.446206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.411 [2024-07-15 22:26:57.446213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.411 qpair failed and we were unable to recover it. 00:29:32.411 [2024-07-15 22:26:57.446654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.411 [2024-07-15 22:26:57.446660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.411 qpair failed and we were unable to recover it. 00:29:32.411 [2024-07-15 22:26:57.447046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.411 [2024-07-15 22:26:57.447053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.411 qpair failed and we were unable to recover it. 00:29:32.411 [2024-07-15 22:26:57.447508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.411 [2024-07-15 22:26:57.447514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.411 qpair failed and we were unable to recover it. 00:29:32.411 [2024-07-15 22:26:57.447915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.411 [2024-07-15 22:26:57.447923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.411 qpair failed and we were unable to recover it. 00:29:32.411 [2024-07-15 22:26:57.448454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.411 [2024-07-15 22:26:57.448481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.411 qpair failed and we were unable to recover it. 00:29:32.411 [2024-07-15 22:26:57.448799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.411 [2024-07-15 22:26:57.448808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.411 qpair failed and we were unable to recover it. 00:29:32.411 [2024-07-15 22:26:57.449251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.411 [2024-07-15 22:26:57.449258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.411 qpair failed and we were unable to recover it. 00:29:32.411 [2024-07-15 22:26:57.449710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.411 [2024-07-15 22:26:57.449716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.411 qpair failed and we were unable to recover it. 
00:29:32.411 [2024-07-15 22:26:57.450109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.411 [2024-07-15 22:26:57.450115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.411 qpair failed and we were unable to recover it. 00:29:32.411 [2024-07-15 22:26:57.450319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.411 [2024-07-15 22:26:57.450328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.411 qpair failed and we were unable to recover it. 00:29:32.411 [2024-07-15 22:26:57.450633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.411 [2024-07-15 22:26:57.450639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.411 qpair failed and we were unable to recover it. 00:29:32.411 [2024-07-15 22:26:57.451063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.411 [2024-07-15 22:26:57.451070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.411 qpair failed and we were unable to recover it. 00:29:32.411 [2024-07-15 22:26:57.451527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.411 [2024-07-15 22:26:57.451533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.411 qpair failed and we were unable to recover it. 00:29:32.411 [2024-07-15 22:26:57.451924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.411 [2024-07-15 22:26:57.451930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.411 qpair failed and we were unable to recover it. 00:29:32.411 [2024-07-15 22:26:57.452344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.411 [2024-07-15 22:26:57.452351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.411 qpair failed and we were unable to recover it. 00:29:32.411 [2024-07-15 22:26:57.452780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.411 [2024-07-15 22:26:57.452788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.411 qpair failed and we were unable to recover it. 00:29:32.411 [2024-07-15 22:26:57.453113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.411 [2024-07-15 22:26:57.453119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.411 qpair failed and we were unable to recover it. 00:29:32.411 [2024-07-15 22:26:57.453491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.411 [2024-07-15 22:26:57.453497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.411 qpair failed and we were unable to recover it. 
00:29:32.411 [2024-07-15 22:26:57.453862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.411 [2024-07-15 22:26:57.453868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.411 qpair failed and we were unable to recover it. 00:29:32.411 [2024-07-15 22:26:57.454359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.411 [2024-07-15 22:26:57.454386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.411 qpair failed and we were unable to recover it. 00:29:32.411 [2024-07-15 22:26:57.454845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.411 [2024-07-15 22:26:57.454853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.411 qpair failed and we were unable to recover it. 00:29:32.411 [2024-07-15 22:26:57.455395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.411 [2024-07-15 22:26:57.455422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.411 qpair failed and we were unable to recover it. 00:29:32.411 [2024-07-15 22:26:57.455712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.411 [2024-07-15 22:26:57.455720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.411 qpair failed and we were unable to recover it. 00:29:32.411 [2024-07-15 22:26:57.456224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.411 [2024-07-15 22:26:57.456232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.411 qpair failed and we were unable to recover it. 00:29:32.411 [2024-07-15 22:26:57.456503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.411 [2024-07-15 22:26:57.456510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.411 qpair failed and we were unable to recover it. 00:29:32.411 [2024-07-15 22:26:57.456923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.411 [2024-07-15 22:26:57.456930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.411 qpair failed and we were unable to recover it. 00:29:32.411 [2024-07-15 22:26:57.457239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.412 [2024-07-15 22:26:57.457247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.412 qpair failed and we were unable to recover it. 00:29:32.412 [2024-07-15 22:26:57.457667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.412 [2024-07-15 22:26:57.457673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.412 qpair failed and we were unable to recover it. 
00:29:32.412 [2024-07-15 22:26:57.458074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.412 [2024-07-15 22:26:57.458081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.412 qpair failed and we were unable to recover it. 00:29:32.412 [2024-07-15 22:26:57.458489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.412 [2024-07-15 22:26:57.458495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.412 qpair failed and we were unable to recover it. 00:29:32.412 [2024-07-15 22:26:57.458801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.412 [2024-07-15 22:26:57.458808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.412 qpair failed and we were unable to recover it. 00:29:32.412 [2024-07-15 22:26:57.459223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.412 [2024-07-15 22:26:57.459230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.412 qpair failed and we were unable to recover it. 00:29:32.412 [2024-07-15 22:26:57.459675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.412 [2024-07-15 22:26:57.459681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.412 qpair failed and we were unable to recover it. 00:29:32.412 [2024-07-15 22:26:57.460064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.412 [2024-07-15 22:26:57.460070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.412 qpair failed and we were unable to recover it. 00:29:32.412 [2024-07-15 22:26:57.460470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.412 [2024-07-15 22:26:57.460478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.412 qpair failed and we were unable to recover it. 00:29:32.412 [2024-07-15 22:26:57.460863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.412 [2024-07-15 22:26:57.460869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.412 qpair failed and we were unable to recover it. 00:29:32.412 [2024-07-15 22:26:57.461267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.412 [2024-07-15 22:26:57.461274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.412 qpair failed and we were unable to recover it. 00:29:32.412 [2024-07-15 22:26:57.461707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.412 [2024-07-15 22:26:57.461713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.412 qpair failed and we were unable to recover it. 
00:29:32.412 [2024-07-15 22:26:57.462115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.412 [2024-07-15 22:26:57.462125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.412 qpair failed and we were unable to recover it. 00:29:32.412 [2024-07-15 22:26:57.462522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.412 [2024-07-15 22:26:57.462529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.412 qpair failed and we were unable to recover it. 00:29:32.412 [2024-07-15 22:26:57.462921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.412 [2024-07-15 22:26:57.462928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.412 qpair failed and we were unable to recover it. 00:29:32.412 [2024-07-15 22:26:57.463439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.412 [2024-07-15 22:26:57.463467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.412 qpair failed and we were unable to recover it. 00:29:32.412 [2024-07-15 22:26:57.463916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.412 [2024-07-15 22:26:57.463924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.412 qpair failed and we were unable to recover it. 00:29:32.412 [2024-07-15 22:26:57.464481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.412 [2024-07-15 22:26:57.464512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.412 qpair failed and we were unable to recover it. 00:29:32.412 [2024-07-15 22:26:57.464914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.412 [2024-07-15 22:26:57.464922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.412 qpair failed and we were unable to recover it. 00:29:32.412 [2024-07-15 22:26:57.465334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.412 [2024-07-15 22:26:57.465361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.412 qpair failed and we were unable to recover it. 00:29:32.412 [2024-07-15 22:26:57.465763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.412 [2024-07-15 22:26:57.465771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.412 qpair failed and we were unable to recover it. 00:29:32.412 [2024-07-15 22:26:57.466343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.412 [2024-07-15 22:26:57.466370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.412 qpair failed and we were unable to recover it. 
00:29:32.412 [2024-07-15 22:26:57.466689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.412 [2024-07-15 22:26:57.466698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.412 qpair failed and we were unable to recover it. 00:29:32.412 [2024-07-15 22:26:57.467198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.412 [2024-07-15 22:26:57.467206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.412 qpair failed and we were unable to recover it. 00:29:32.412 [2024-07-15 22:26:57.467419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.412 [2024-07-15 22:26:57.467429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.412 qpair failed and we were unable to recover it. 00:29:32.412 [2024-07-15 22:26:57.467857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.412 [2024-07-15 22:26:57.467864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.412 qpair failed and we were unable to recover it. 00:29:32.412 [2024-07-15 22:26:57.468255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.412 [2024-07-15 22:26:57.468262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.412 qpair failed and we were unable to recover it. 00:29:32.412 [2024-07-15 22:26:57.468585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.412 [2024-07-15 22:26:57.468592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.412 qpair failed and we were unable to recover it. 00:29:32.412 [2024-07-15 22:26:57.469014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.412 [2024-07-15 22:26:57.469020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.412 qpair failed and we were unable to recover it. 00:29:32.412 [2024-07-15 22:26:57.469441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.412 [2024-07-15 22:26:57.469447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.412 qpair failed and we were unable to recover it. 00:29:32.412 [2024-07-15 22:26:57.469714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.412 [2024-07-15 22:26:57.469721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.412 qpair failed and we were unable to recover it. 00:29:32.412 [2024-07-15 22:26:57.470136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.412 [2024-07-15 22:26:57.470143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.412 qpair failed and we were unable to recover it. 
00:29:32.412 [2024-07-15 22:26:57.470445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.412 [2024-07-15 22:26:57.470451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.412 qpair failed and we were unable to recover it. 00:29:32.412 [2024-07-15 22:26:57.470872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.412 [2024-07-15 22:26:57.470880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.412 qpair failed and we were unable to recover it. 00:29:32.412 [2024-07-15 22:26:57.471338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.412 [2024-07-15 22:26:57.471345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.412 qpair failed and we were unable to recover it. 00:29:32.412 [2024-07-15 22:26:57.471669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.412 [2024-07-15 22:26:57.471677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.412 qpair failed and we were unable to recover it. 00:29:32.412 [2024-07-15 22:26:57.472111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.412 [2024-07-15 22:26:57.472118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.412 qpair failed and we were unable to recover it. 00:29:32.412 [2024-07-15 22:26:57.472514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.412 [2024-07-15 22:26:57.472521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.412 qpair failed and we were unable to recover it. 00:29:32.412 [2024-07-15 22:26:57.472911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.412 [2024-07-15 22:26:57.472918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.412 qpair failed and we were unable to recover it. 00:29:32.412 [2024-07-15 22:26:57.473236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.413 [2024-07-15 22:26:57.473243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.413 qpair failed and we were unable to recover it. 00:29:32.413 [2024-07-15 22:26:57.473676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.413 [2024-07-15 22:26:57.473683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.413 qpair failed and we were unable to recover it. 00:29:32.413 [2024-07-15 22:26:57.474068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.413 [2024-07-15 22:26:57.474074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.413 qpair failed and we were unable to recover it. 
00:29:32.413 [2024-07-15 22:26:57.474457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.413 [2024-07-15 22:26:57.474464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.413 qpair failed and we were unable to recover it. 00:29:32.413 [2024-07-15 22:26:57.474863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.413 [2024-07-15 22:26:57.474869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.413 qpair failed and we were unable to recover it. 00:29:32.413 [2024-07-15 22:26:57.475266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.413 [2024-07-15 22:26:57.475274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.413 qpair failed and we were unable to recover it. 00:29:32.413 [2024-07-15 22:26:57.475682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.413 [2024-07-15 22:26:57.475688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.413 qpair failed and we were unable to recover it. 00:29:32.413 [2024-07-15 22:26:57.476075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.413 [2024-07-15 22:26:57.476081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.413 qpair failed and we were unable to recover it. 00:29:32.413 [2024-07-15 22:26:57.476538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.413 [2024-07-15 22:26:57.476546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.413 qpair failed and we were unable to recover it. 00:29:32.413 [2024-07-15 22:26:57.476957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.413 [2024-07-15 22:26:57.476964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.413 qpair failed and we were unable to recover it. 00:29:32.413 [2024-07-15 22:26:57.477454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.413 [2024-07-15 22:26:57.477482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.413 qpair failed and we were unable to recover it. 00:29:32.413 [2024-07-15 22:26:57.477881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.413 [2024-07-15 22:26:57.477890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.413 qpair failed and we were unable to recover it. 00:29:32.413 [2024-07-15 22:26:57.478095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.413 [2024-07-15 22:26:57.478105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.413 qpair failed and we were unable to recover it. 
00:29:32.413 [2024-07-15 22:26:57.478508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.413 [2024-07-15 22:26:57.478516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.413 qpair failed and we were unable to recover it. 00:29:32.413 [2024-07-15 22:26:57.478924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.413 [2024-07-15 22:26:57.478931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.413 qpair failed and we were unable to recover it. 00:29:32.413 [2024-07-15 22:26:57.479480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.413 [2024-07-15 22:26:57.479508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.413 qpair failed and we were unable to recover it. 00:29:32.413 [2024-07-15 22:26:57.479910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.413 [2024-07-15 22:26:57.479919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.413 qpair failed and we were unable to recover it. 00:29:32.413 [2024-07-15 22:26:57.480438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.413 [2024-07-15 22:26:57.480465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.413 qpair failed and we were unable to recover it. 00:29:32.413 [2024-07-15 22:26:57.480820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.413 [2024-07-15 22:26:57.480832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.413 qpair failed and we were unable to recover it. 00:29:32.413 [2024-07-15 22:26:57.481328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.413 [2024-07-15 22:26:57.481356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.413 qpair failed and we were unable to recover it. 00:29:32.413 [2024-07-15 22:26:57.481831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.413 [2024-07-15 22:26:57.481839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.413 qpair failed and we were unable to recover it. 00:29:32.413 [2024-07-15 22:26:57.482356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.413 [2024-07-15 22:26:57.482384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.413 qpair failed and we were unable to recover it. 00:29:32.413 [2024-07-15 22:26:57.482789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.413 [2024-07-15 22:26:57.482797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.413 qpair failed and we were unable to recover it. 
00:29:32.413 [2024-07-15 22:26:57.483557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.413 [2024-07-15 22:26:57.483572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.413 qpair failed and we were unable to recover it. 00:29:32.413 [2024-07-15 22:26:57.483959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.413 [2024-07-15 22:26:57.483966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.413 qpair failed and we were unable to recover it. 00:29:32.413 [2024-07-15 22:26:57.484489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.413 [2024-07-15 22:26:57.484516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.413 qpair failed and we were unable to recover it. 00:29:32.413 [2024-07-15 22:26:57.484917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.413 [2024-07-15 22:26:57.484925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.413 qpair failed and we were unable to recover it. 00:29:32.413 [2024-07-15 22:26:57.485424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.413 [2024-07-15 22:26:57.485452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.413 qpair failed and we were unable to recover it. 00:29:32.413 [2024-07-15 22:26:57.485853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.413 [2024-07-15 22:26:57.485862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.413 qpair failed and we were unable to recover it. 00:29:32.413 [2024-07-15 22:26:57.486396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.413 [2024-07-15 22:26:57.486424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.413 qpair failed and we were unable to recover it. 00:29:32.413 [2024-07-15 22:26:57.486837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.413 [2024-07-15 22:26:57.486846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.413 qpair failed and we were unable to recover it. 00:29:32.413 [2024-07-15 22:26:57.487341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.413 [2024-07-15 22:26:57.487369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.413 qpair failed and we were unable to recover it. 00:29:32.413 [2024-07-15 22:26:57.487771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.413 [2024-07-15 22:26:57.487779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.413 qpair failed and we were unable to recover it. 
00:29:32.413 [2024-07-15 22:26:57.488090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.413 [2024-07-15 22:26:57.488098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.413 qpair failed and we were unable to recover it. 00:29:32.413 [2024-07-15 22:26:57.488510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.413 [2024-07-15 22:26:57.488518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.413 qpair failed and we were unable to recover it. 00:29:32.413 [2024-07-15 22:26:57.488904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.413 [2024-07-15 22:26:57.488912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.413 qpair failed and we were unable to recover it. 00:29:32.413 [2024-07-15 22:26:57.489409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.414 [2024-07-15 22:26:57.489438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.414 qpair failed and we were unable to recover it. 00:29:32.414 [2024-07-15 22:26:57.489757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.414 [2024-07-15 22:26:57.489766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.414 qpair failed and we were unable to recover it. 00:29:32.414 [2024-07-15 22:26:57.490182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.414 [2024-07-15 22:26:57.490190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.414 qpair failed and we were unable to recover it. 00:29:32.414 [2024-07-15 22:26:57.490581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.414 [2024-07-15 22:26:57.490588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.414 qpair failed and we were unable to recover it. 00:29:32.414 [2024-07-15 22:26:57.491069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.414 [2024-07-15 22:26:57.491075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.414 qpair failed and we were unable to recover it. 00:29:32.414 [2024-07-15 22:26:57.491276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.414 [2024-07-15 22:26:57.491285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.414 qpair failed and we were unable to recover it. 00:29:32.414 [2024-07-15 22:26:57.491665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.414 [2024-07-15 22:26:57.491672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.414 qpair failed and we were unable to recover it. 
00:29:32.414 [2024-07-15 22:26:57.492051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.414 [2024-07-15 22:26:57.492057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.414 qpair failed and we were unable to recover it. 00:29:32.414 [2024-07-15 22:26:57.492479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.414 [2024-07-15 22:26:57.492486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.414 qpair failed and we were unable to recover it. 00:29:32.414 [2024-07-15 22:26:57.492925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.414 [2024-07-15 22:26:57.492932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.414 qpair failed and we were unable to recover it. 00:29:32.414 [2024-07-15 22:26:57.493341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.414 [2024-07-15 22:26:57.493348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.414 qpair failed and we were unable to recover it. 00:29:32.414 [2024-07-15 22:26:57.493780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.414 [2024-07-15 22:26:57.493787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.414 qpair failed and we were unable to recover it. 00:29:32.414 [2024-07-15 22:26:57.494200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.414 [2024-07-15 22:26:57.494207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.414 qpair failed and we were unable to recover it. 00:29:32.414 [2024-07-15 22:26:57.494601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.414 [2024-07-15 22:26:57.494607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.414 qpair failed and we were unable to recover it. 00:29:32.414 [2024-07-15 22:26:57.494919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.414 [2024-07-15 22:26:57.494926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.414 qpair failed and we were unable to recover it. 00:29:32.414 [2024-07-15 22:26:57.495244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.414 [2024-07-15 22:26:57.495251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.414 qpair failed and we were unable to recover it. 00:29:32.414 [2024-07-15 22:26:57.495666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.414 [2024-07-15 22:26:57.495673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.414 qpair failed and we were unable to recover it. 
00:29:32.414 [2024-07-15 22:26:57.496060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.414 [2024-07-15 22:26:57.496067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.414 qpair failed and we were unable to recover it. 00:29:32.414 [2024-07-15 22:26:57.496358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.414 [2024-07-15 22:26:57.496365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.414 qpair failed and we were unable to recover it. 00:29:32.414 [2024-07-15 22:26:57.496676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.414 [2024-07-15 22:26:57.496682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.414 qpair failed and we were unable to recover it. 00:29:32.414 [2024-07-15 22:26:57.497083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.414 [2024-07-15 22:26:57.497089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.414 qpair failed and we were unable to recover it. 00:29:32.414 [2024-07-15 22:26:57.497470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.414 [2024-07-15 22:26:57.497478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.414 qpair failed and we were unable to recover it. 00:29:32.414 [2024-07-15 22:26:57.497799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.414 [2024-07-15 22:26:57.497807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.414 qpair failed and we were unable to recover it. 00:29:32.414 [2024-07-15 22:26:57.498238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.414 [2024-07-15 22:26:57.498245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.414 qpair failed and we were unable to recover it. 00:29:32.414 [2024-07-15 22:26:57.498772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.414 [2024-07-15 22:26:57.498779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.414 qpair failed and we were unable to recover it. 00:29:32.414 [2024-07-15 22:26:57.498931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.414 [2024-07-15 22:26:57.498939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.414 qpair failed and we were unable to recover it. 00:29:32.414 [2024-07-15 22:26:57.499310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.414 [2024-07-15 22:26:57.499317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.414 qpair failed and we were unable to recover it. 
00:29:32.414 [2024-07-15 22:26:57.499738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.414 [2024-07-15 22:26:57.499745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.414 qpair failed and we were unable to recover it. 00:29:32.414 [2024-07-15 22:26:57.500146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.414 [2024-07-15 22:26:57.500153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.414 qpair failed and we were unable to recover it. 00:29:32.414 [2024-07-15 22:26:57.500540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.414 [2024-07-15 22:26:57.500547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.414 qpair failed and we were unable to recover it. 00:29:32.414 [2024-07-15 22:26:57.500989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.415 [2024-07-15 22:26:57.500995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.415 qpair failed and we were unable to recover it. 00:29:32.415 [2024-07-15 22:26:57.501405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.415 [2024-07-15 22:26:57.501411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.415 qpair failed and we were unable to recover it. 00:29:32.415 [2024-07-15 22:26:57.501777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.415 [2024-07-15 22:26:57.501784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.415 qpair failed and we were unable to recover it. 00:29:32.415 [2024-07-15 22:26:57.502100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.415 [2024-07-15 22:26:57.502107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.415 qpair failed and we were unable to recover it. 00:29:32.415 [2024-07-15 22:26:57.502485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.415 [2024-07-15 22:26:57.502492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.415 qpair failed and we were unable to recover it. 00:29:32.415 [2024-07-15 22:26:57.502879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.415 [2024-07-15 22:26:57.502886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.415 qpair failed and we were unable to recover it. 00:29:32.415 [2024-07-15 22:26:57.503409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.415 [2024-07-15 22:26:57.503437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.415 qpair failed and we were unable to recover it. 
00:29:32.415 [2024-07-15 22:26:57.503840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.415 [2024-07-15 22:26:57.503848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.415 qpair failed and we were unable to recover it. 00:29:32.415 [2024-07-15 22:26:57.504054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.415 [2024-07-15 22:26:57.504065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.415 qpair failed and we were unable to recover it. 00:29:32.415 [2024-07-15 22:26:57.504491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.415 [2024-07-15 22:26:57.504498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.415 qpair failed and we were unable to recover it. 00:29:32.415 [2024-07-15 22:26:57.504891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.415 [2024-07-15 22:26:57.504898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.415 qpair failed and we were unable to recover it. 00:29:32.415 [2024-07-15 22:26:57.505379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.415 [2024-07-15 22:26:57.505407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.415 qpair failed and we were unable to recover it. 00:29:32.415 [2024-07-15 22:26:57.505811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.415 [2024-07-15 22:26:57.505819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.415 qpair failed and we were unable to recover it. 00:29:32.415 [2024-07-15 22:26:57.506209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.415 [2024-07-15 22:26:57.506216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.415 qpair failed and we were unable to recover it. 00:29:32.415 [2024-07-15 22:26:57.506713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.415 [2024-07-15 22:26:57.506721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.415 qpair failed and we were unable to recover it. 00:29:32.415 [2024-07-15 22:26:57.507040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.415 [2024-07-15 22:26:57.507048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.415 qpair failed and we were unable to recover it. 00:29:32.415 [2024-07-15 22:26:57.507369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.415 [2024-07-15 22:26:57.507376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.415 qpair failed and we were unable to recover it. 
00:29:32.415 [2024-07-15 22:26:57.507788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.415 [2024-07-15 22:26:57.507795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.415 qpair failed and we were unable to recover it. 00:29:32.415 [2024-07-15 22:26:57.508205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.415 [2024-07-15 22:26:57.508212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.415 qpair failed and we were unable to recover it. 00:29:32.415 [2024-07-15 22:26:57.508659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.415 [2024-07-15 22:26:57.508666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.415 qpair failed and we were unable to recover it. 00:29:32.415 [2024-07-15 22:26:57.509126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.415 [2024-07-15 22:26:57.509133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.415 qpair failed and we were unable to recover it. 00:29:32.415 [2024-07-15 22:26:57.509515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.415 [2024-07-15 22:26:57.509521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.415 qpair failed and we were unable to recover it. 00:29:32.415 [2024-07-15 22:26:57.509813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.415 [2024-07-15 22:26:57.509821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.415 qpair failed and we were unable to recover it. 00:29:32.415 [2024-07-15 22:26:57.510229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.415 [2024-07-15 22:26:57.510235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.415 qpair failed and we were unable to recover it. 00:29:32.415 [2024-07-15 22:26:57.510657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.415 [2024-07-15 22:26:57.510664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.415 qpair failed and we were unable to recover it. 00:29:32.415 [2024-07-15 22:26:57.511094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.415 [2024-07-15 22:26:57.511101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.415 qpair failed and we were unable to recover it. 00:29:32.415 [2024-07-15 22:26:57.511519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.415 [2024-07-15 22:26:57.511526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.415 qpair failed and we were unable to recover it. 
00:29:32.415 [2024-07-15 22:26:57.511933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.415 [2024-07-15 22:26:57.511941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.415 qpair failed and we were unable to recover it. 00:29:32.415 [2024-07-15 22:26:57.512396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.415 [2024-07-15 22:26:57.512424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.415 qpair failed and we were unable to recover it. 00:29:32.415 [2024-07-15 22:26:57.512856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.415 [2024-07-15 22:26:57.512864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.415 qpair failed and we were unable to recover it. 00:29:32.415 [2024-07-15 22:26:57.513359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.415 [2024-07-15 22:26:57.513387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.415 qpair failed and we were unable to recover it. 00:29:32.415 [2024-07-15 22:26:57.513793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.415 [2024-07-15 22:26:57.513801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.415 qpair failed and we were unable to recover it. 00:29:32.415 [2024-07-15 22:26:57.514194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.415 [2024-07-15 22:26:57.514205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.415 qpair failed and we were unable to recover it. 00:29:32.415 [2024-07-15 22:26:57.514629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.415 [2024-07-15 22:26:57.514635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.415 qpair failed and we were unable to recover it. 00:29:32.415 [2024-07-15 22:26:57.515025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.415 [2024-07-15 22:26:57.515031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.415 qpair failed and we were unable to recover it. 00:29:32.415 [2024-07-15 22:26:57.515457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.415 [2024-07-15 22:26:57.515463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.415 qpair failed and we were unable to recover it. 00:29:32.415 [2024-07-15 22:26:57.515659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.416 [2024-07-15 22:26:57.515669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.416 qpair failed and we were unable to recover it. 
00:29:32.416 [2024-07-15 22:26:57.516098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.416 [2024-07-15 22:26:57.516105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.416 qpair failed and we were unable to recover it. 00:29:32.416 [2024-07-15 22:26:57.516527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.416 [2024-07-15 22:26:57.516534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.416 qpair failed and we were unable to recover it. 00:29:32.416 [2024-07-15 22:26:57.516946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.416 [2024-07-15 22:26:57.516953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.416 qpair failed and we were unable to recover it. 00:29:32.416 [2024-07-15 22:26:57.517359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.416 [2024-07-15 22:26:57.517366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.416 qpair failed and we were unable to recover it. 00:29:32.416 [2024-07-15 22:26:57.517757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.416 [2024-07-15 22:26:57.517763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.416 qpair failed and we were unable to recover it. 00:29:32.416 [2024-07-15 22:26:57.518147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.416 [2024-07-15 22:26:57.518154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.416 qpair failed and we were unable to recover it. 00:29:32.416 [2024-07-15 22:26:57.518448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.416 [2024-07-15 22:26:57.518455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.416 qpair failed and we were unable to recover it. 00:29:32.416 [2024-07-15 22:26:57.518787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.416 [2024-07-15 22:26:57.518794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.416 qpair failed and we were unable to recover it. 00:29:32.416 [2024-07-15 22:26:57.519197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.416 [2024-07-15 22:26:57.519204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.416 qpair failed and we were unable to recover it. 00:29:32.416 [2024-07-15 22:26:57.519645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.416 [2024-07-15 22:26:57.519651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.416 qpair failed and we were unable to recover it. 
00:29:32.416 [2024-07-15 22:26:57.520041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.416 [2024-07-15 22:26:57.520047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.416 qpair failed and we were unable to recover it. 00:29:32.416 [2024-07-15 22:26:57.520511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.416 [2024-07-15 22:26:57.520519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.416 qpair failed and we were unable to recover it. 00:29:32.416 [2024-07-15 22:26:57.520937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.416 [2024-07-15 22:26:57.520944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.416 qpair failed and we were unable to recover it. 00:29:32.416 [2024-07-15 22:26:57.521347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.416 [2024-07-15 22:26:57.521354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.416 qpair failed and we were unable to recover it. 00:29:32.416 [2024-07-15 22:26:57.521813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.416 [2024-07-15 22:26:57.521820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.416 qpair failed and we were unable to recover it. 00:29:32.416 [2024-07-15 22:26:57.522224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.416 [2024-07-15 22:26:57.522231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.416 qpair failed and we were unable to recover it. 00:29:32.416 [2024-07-15 22:26:57.522637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.416 [2024-07-15 22:26:57.522644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.416 qpair failed and we were unable to recover it. 00:29:32.416 [2024-07-15 22:26:57.523038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.416 [2024-07-15 22:26:57.523045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.416 qpair failed and we were unable to recover it. 00:29:32.416 [2024-07-15 22:26:57.523427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.416 [2024-07-15 22:26:57.523434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.416 qpair failed and we were unable to recover it. 00:29:32.416 [2024-07-15 22:26:57.523867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.416 [2024-07-15 22:26:57.523874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.416 qpair failed and we were unable to recover it. 
00:29:32.416 [2024-07-15 22:26:57.524268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.416 [2024-07-15 22:26:57.524275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.416 qpair failed and we were unable to recover it. 00:29:32.416 [2024-07-15 22:26:57.524755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.416 [2024-07-15 22:26:57.524761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.416 qpair failed and we were unable to recover it. 00:29:32.416 [2024-07-15 22:26:57.525142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.416 [2024-07-15 22:26:57.525149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.416 qpair failed and we were unable to recover it. 00:29:32.416 [2024-07-15 22:26:57.525556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.416 [2024-07-15 22:26:57.525563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.416 qpair failed and we were unable to recover it. 00:29:32.416 [2024-07-15 22:26:57.525950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.416 [2024-07-15 22:26:57.525956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.416 qpair failed and we were unable to recover it. 00:29:32.416 [2024-07-15 22:26:57.526285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.416 [2024-07-15 22:26:57.526292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.416 qpair failed and we were unable to recover it. 00:29:32.416 [2024-07-15 22:26:57.526751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.416 [2024-07-15 22:26:57.526757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.416 qpair failed and we were unable to recover it. 00:29:32.416 [2024-07-15 22:26:57.527058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.416 [2024-07-15 22:26:57.527066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.416 qpair failed and we were unable to recover it. 00:29:32.416 [2024-07-15 22:26:57.527479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.416 [2024-07-15 22:26:57.527486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.416 qpair failed and we were unable to recover it. 00:29:32.416 [2024-07-15 22:26:57.527798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.416 [2024-07-15 22:26:57.527805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.416 qpair failed and we were unable to recover it. 
00:29:32.416 [2024-07-15 22:26:57.528214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.416 [2024-07-15 22:26:57.528221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.416 qpair failed and we were unable to recover it. 00:29:32.416 [2024-07-15 22:26:57.528614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.416 [2024-07-15 22:26:57.528621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.416 qpair failed and we were unable to recover it. 00:29:32.416 [2024-07-15 22:26:57.528822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.416 [2024-07-15 22:26:57.528832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.416 qpair failed and we were unable to recover it. 00:29:32.416 [2024-07-15 22:26:57.529133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.416 [2024-07-15 22:26:57.529140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.416 qpair failed and we were unable to recover it. 00:29:32.416 [2024-07-15 22:26:57.529523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.416 [2024-07-15 22:26:57.529531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.416 qpair failed and we were unable to recover it. 00:29:32.416 [2024-07-15 22:26:57.529936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.416 [2024-07-15 22:26:57.529951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.416 qpair failed and we were unable to recover it. 00:29:32.416 [2024-07-15 22:26:57.530270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.416 [2024-07-15 22:26:57.530277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.416 qpair failed and we were unable to recover it. 00:29:32.416 [2024-07-15 22:26:57.530682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.416 [2024-07-15 22:26:57.530688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.416 qpair failed and we were unable to recover it. 00:29:32.416 [2024-07-15 22:26:57.531075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.417 [2024-07-15 22:26:57.531082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.417 qpair failed and we were unable to recover it. 00:29:32.417 [2024-07-15 22:26:57.531540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.417 [2024-07-15 22:26:57.531547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.417 qpair failed and we were unable to recover it. 
00:29:32.417 [2024-07-15 22:26:57.531974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.417 [2024-07-15 22:26:57.531981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.417 qpair failed and we were unable to recover it. 00:29:32.417 [2024-07-15 22:26:57.532485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.417 [2024-07-15 22:26:57.532512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.417 qpair failed and we were unable to recover it. 00:29:32.417 [2024-07-15 22:26:57.532917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.417 [2024-07-15 22:26:57.532926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.417 qpair failed and we were unable to recover it. 00:29:32.417 [2024-07-15 22:26:57.533457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.417 [2024-07-15 22:26:57.533485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.417 qpair failed and we were unable to recover it. 00:29:32.417 [2024-07-15 22:26:57.533885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.417 [2024-07-15 22:26:57.533893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.417 qpair failed and we were unable to recover it. 00:29:32.417 [2024-07-15 22:26:57.534435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.417 [2024-07-15 22:26:57.534463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.417 qpair failed and we were unable to recover it. 00:29:32.417 [2024-07-15 22:26:57.534930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.417 [2024-07-15 22:26:57.534939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.417 qpair failed and we were unable to recover it. 00:29:32.417 [2024-07-15 22:26:57.535459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.417 [2024-07-15 22:26:57.535486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.417 qpair failed and we were unable to recover it. 00:29:32.417 [2024-07-15 22:26:57.535916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.417 [2024-07-15 22:26:57.535925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.417 qpair failed and we were unable to recover it. 00:29:32.417 [2024-07-15 22:26:57.536441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.417 [2024-07-15 22:26:57.536468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.417 qpair failed and we were unable to recover it. 
00:29:32.417 [2024-07-15 22:26:57.536870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.417 [2024-07-15 22:26:57.536878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.417 qpair failed and we were unable to recover it. 00:29:32.417 [2024-07-15 22:26:57.537365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.417 [2024-07-15 22:26:57.537392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.417 qpair failed and we were unable to recover it. 00:29:32.417 [2024-07-15 22:26:57.537797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.417 [2024-07-15 22:26:57.537805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.417 qpair failed and we were unable to recover it. 00:29:32.417 [2024-07-15 22:26:57.538008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.417 [2024-07-15 22:26:57.538017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.417 qpair failed and we were unable to recover it. 00:29:32.417 [2024-07-15 22:26:57.538406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.417 [2024-07-15 22:26:57.538413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.417 qpair failed and we were unable to recover it. 00:29:32.417 [2024-07-15 22:26:57.538814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.417 [2024-07-15 22:26:57.538821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.417 qpair failed and we were unable to recover it. 00:29:32.417 [2024-07-15 22:26:57.539249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.417 [2024-07-15 22:26:57.539257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.417 qpair failed and we were unable to recover it. 00:29:32.417 [2024-07-15 22:26:57.539681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.417 [2024-07-15 22:26:57.539688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.417 qpair failed and we were unable to recover it. 00:29:32.417 [2024-07-15 22:26:57.540070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.417 [2024-07-15 22:26:57.540077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.417 qpair failed and we were unable to recover it. 00:29:32.417 [2024-07-15 22:26:57.540512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.417 [2024-07-15 22:26:57.540520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.417 qpair failed and we were unable to recover it. 
00:29:32.417 [2024-07-15 22:26:57.540914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.417 [2024-07-15 22:26:57.540921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.417 qpair failed and we were unable to recover it. 00:29:32.417 [2024-07-15 22:26:57.541328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.417 [2024-07-15 22:26:57.541335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.417 qpair failed and we were unable to recover it. 00:29:32.417 [2024-07-15 22:26:57.541739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.417 [2024-07-15 22:26:57.541747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.417 qpair failed and we were unable to recover it. 00:29:32.417 [2024-07-15 22:26:57.542135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.417 [2024-07-15 22:26:57.542142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.417 qpair failed and we were unable to recover it. 00:29:32.417 [2024-07-15 22:26:57.542550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.417 [2024-07-15 22:26:57.542556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.417 qpair failed and we were unable to recover it. 00:29:32.417 [2024-07-15 22:26:57.542940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.417 [2024-07-15 22:26:57.542946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.417 qpair failed and we were unable to recover it. 00:29:32.417 [2024-07-15 22:26:57.543341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.417 [2024-07-15 22:26:57.543348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.417 qpair failed and we were unable to recover it. 00:29:32.417 [2024-07-15 22:26:57.543736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.417 [2024-07-15 22:26:57.543742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.417 qpair failed and we were unable to recover it. 00:29:32.417 [2024-07-15 22:26:57.544024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.417 [2024-07-15 22:26:57.544031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.417 qpair failed and we were unable to recover it. 00:29:32.417 [2024-07-15 22:26:57.544451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.417 [2024-07-15 22:26:57.544458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.417 qpair failed and we were unable to recover it. 
00:29:32.417 [2024-07-15 22:26:57.544858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.417 [2024-07-15 22:26:57.544865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.417 qpair failed and we were unable to recover it. 00:29:32.417 [2024-07-15 22:26:57.545293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.417 [2024-07-15 22:26:57.545301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.417 qpair failed and we were unable to recover it. 00:29:32.417 [2024-07-15 22:26:57.545693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.417 [2024-07-15 22:26:57.545700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.417 qpair failed and we were unable to recover it. 00:29:32.417 [2024-07-15 22:26:57.546129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.417 [2024-07-15 22:26:57.546136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.417 qpair failed and we were unable to recover it. 00:29:32.417 [2024-07-15 22:26:57.546544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.417 [2024-07-15 22:26:57.546551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.417 qpair failed and we were unable to recover it. 00:29:32.417 [2024-07-15 22:26:57.546862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.417 [2024-07-15 22:26:57.546870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.417 qpair failed and we were unable to recover it. 00:29:32.417 [2024-07-15 22:26:57.547383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.417 [2024-07-15 22:26:57.547411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.417 qpair failed and we were unable to recover it. 00:29:32.417 [2024-07-15 22:26:57.547809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.417 [2024-07-15 22:26:57.547818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.417 qpair failed and we were unable to recover it. 00:29:32.418 [2024-07-15 22:26:57.548234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.418 [2024-07-15 22:26:57.548241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.418 qpair failed and we were unable to recover it. 00:29:32.418 [2024-07-15 22:26:57.548637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.418 [2024-07-15 22:26:57.548644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.418 qpair failed and we were unable to recover it. 
00:29:32.418 [2024-07-15 22:26:57.549076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.418 [2024-07-15 22:26:57.549084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:32.418 qpair failed and we were unable to recover it.
00:29:32.418 [2024-07-15 22:26:57.549483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.418 [2024-07-15 22:26:57.549490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:32.418 qpair failed and we were unable to recover it.
00:29:32.418 [... the same three lines (posix.c:1038:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeat for every subsequent qpair connect attempt from 22:26:57.549897 through 22:26:57.638232, log timestamps 00:29:32.418-00:29:32.423 ...]
00:29:32.423 [2024-07-15 22:26:57.638633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.423 [2024-07-15 22:26:57.638639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.423 qpair failed and we were unable to recover it. 00:29:32.423 [2024-07-15 22:26:57.639034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.423 [2024-07-15 22:26:57.639040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.423 qpair failed and we were unable to recover it. 00:29:32.423 [2024-07-15 22:26:57.639444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.423 [2024-07-15 22:26:57.639450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.423 qpair failed and we were unable to recover it. 00:29:32.423 [2024-07-15 22:26:57.639886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.423 [2024-07-15 22:26:57.639893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.423 qpair failed and we were unable to recover it. 00:29:32.423 [2024-07-15 22:26:57.640322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.423 [2024-07-15 22:26:57.640330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.423 qpair failed and we were unable to recover it. 00:29:32.423 [2024-07-15 22:26:57.640754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.423 [2024-07-15 22:26:57.640760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.423 qpair failed and we were unable to recover it. 00:29:32.423 [2024-07-15 22:26:57.641222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.423 [2024-07-15 22:26:57.641229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.423 qpair failed and we were unable to recover it. 00:29:32.423 [2024-07-15 22:26:57.641638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.423 [2024-07-15 22:26:57.641645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.423 qpair failed and we were unable to recover it. 00:29:32.423 [2024-07-15 22:26:57.642036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.423 [2024-07-15 22:26:57.642042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.423 qpair failed and we were unable to recover it. 00:29:32.423 [2024-07-15 22:26:57.642351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.423 [2024-07-15 22:26:57.642358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.423 qpair failed and we were unable to recover it. 
00:29:32.423 [2024-07-15 22:26:57.642747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.423 [2024-07-15 22:26:57.642753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.423 qpair failed and we were unable to recover it. 00:29:32.423 [2024-07-15 22:26:57.643044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.423 [2024-07-15 22:26:57.643051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.423 qpair failed and we were unable to recover it. 00:29:32.423 [2024-07-15 22:26:57.643461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.423 [2024-07-15 22:26:57.643470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.423 qpair failed and we were unable to recover it. 00:29:32.423 [2024-07-15 22:26:57.643537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.423 [2024-07-15 22:26:57.643545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.423 qpair failed and we were unable to recover it. 00:29:32.423 [2024-07-15 22:26:57.643906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.423 [2024-07-15 22:26:57.643914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.423 qpair failed and we were unable to recover it. 00:29:32.423 [2024-07-15 22:26:57.644232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.423 [2024-07-15 22:26:57.644239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.423 qpair failed and we were unable to recover it. 00:29:32.423 [2024-07-15 22:26:57.644685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.423 [2024-07-15 22:26:57.644692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.423 qpair failed and we were unable to recover it. 00:29:32.424 [2024-07-15 22:26:57.645085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.424 [2024-07-15 22:26:57.645091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.424 qpair failed and we were unable to recover it. 00:29:32.424 [2024-07-15 22:26:57.645602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.424 [2024-07-15 22:26:57.645609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.424 qpair failed and we were unable to recover it. 00:29:32.424 [2024-07-15 22:26:57.646039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.424 [2024-07-15 22:26:57.646045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.424 qpair failed and we were unable to recover it. 
00:29:32.424 [2024-07-15 22:26:57.646436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.424 [2024-07-15 22:26:57.646443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.424 qpair failed and we were unable to recover it. 00:29:32.424 [2024-07-15 22:26:57.646870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.424 [2024-07-15 22:26:57.646877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.424 qpair failed and we were unable to recover it. 00:29:32.424 [2024-07-15 22:26:57.647265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.424 [2024-07-15 22:26:57.647272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.424 qpair failed and we were unable to recover it. 00:29:32.424 [2024-07-15 22:26:57.647664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.424 [2024-07-15 22:26:57.647671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.424 qpair failed and we were unable to recover it. 00:29:32.424 [2024-07-15 22:26:57.648079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.424 [2024-07-15 22:26:57.648087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.424 qpair failed and we were unable to recover it. 00:29:32.424 [2024-07-15 22:26:57.648415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.424 [2024-07-15 22:26:57.648423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.424 qpair failed and we were unable to recover it. 00:29:32.424 [2024-07-15 22:26:57.648857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.424 [2024-07-15 22:26:57.648864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.424 qpair failed and we were unable to recover it. 00:29:32.424 [2024-07-15 22:26:57.649177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.424 [2024-07-15 22:26:57.649184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.424 qpair failed and we were unable to recover it. 00:29:32.424 [2024-07-15 22:26:57.649616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.424 [2024-07-15 22:26:57.649622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.424 qpair failed and we were unable to recover it. 00:29:32.424 [2024-07-15 22:26:57.650036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.424 [2024-07-15 22:26:57.650044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.424 qpair failed and we were unable to recover it. 
00:29:32.424 [2024-07-15 22:26:57.650369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.424 [2024-07-15 22:26:57.650376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.424 qpair failed and we were unable to recover it. 00:29:32.424 [2024-07-15 22:26:57.650833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.424 [2024-07-15 22:26:57.650840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.424 qpair failed and we were unable to recover it. 00:29:32.424 [2024-07-15 22:26:57.651239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.424 [2024-07-15 22:26:57.651246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.424 qpair failed and we were unable to recover it. 00:29:32.424 [2024-07-15 22:26:57.651644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.424 [2024-07-15 22:26:57.651651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.424 qpair failed and we were unable to recover it. 00:29:32.424 [2024-07-15 22:26:57.652084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.424 [2024-07-15 22:26:57.652091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.424 qpair failed and we were unable to recover it. 00:29:32.424 [2024-07-15 22:26:57.652498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.424 [2024-07-15 22:26:57.652506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.424 qpair failed and we were unable to recover it. 00:29:32.424 [2024-07-15 22:26:57.652920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.424 [2024-07-15 22:26:57.652928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.424 qpair failed and we were unable to recover it. 00:29:32.424 [2024-07-15 22:26:57.653489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.424 [2024-07-15 22:26:57.653516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.424 qpair failed and we were unable to recover it. 00:29:32.424 [2024-07-15 22:26:57.653945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.424 [2024-07-15 22:26:57.653954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.424 qpair failed and we were unable to recover it. 00:29:32.424 [2024-07-15 22:26:57.654468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.424 [2024-07-15 22:26:57.654496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.424 qpair failed and we were unable to recover it. 
00:29:32.424 [2024-07-15 22:26:57.654913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.424 [2024-07-15 22:26:57.654922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.424 qpair failed and we were unable to recover it. 00:29:32.424 [2024-07-15 22:26:57.655440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.424 [2024-07-15 22:26:57.655471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.424 qpair failed and we were unable to recover it. 00:29:32.424 [2024-07-15 22:26:57.655880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.424 [2024-07-15 22:26:57.655888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.424 qpair failed and we were unable to recover it. 00:29:32.424 [2024-07-15 22:26:57.656358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.424 [2024-07-15 22:26:57.656385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.424 qpair failed and we were unable to recover it. 00:29:32.424 [2024-07-15 22:26:57.656791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.424 [2024-07-15 22:26:57.656799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.424 qpair failed and we were unable to recover it. 00:29:32.424 [2024-07-15 22:26:57.657408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.424 [2024-07-15 22:26:57.657435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.424 qpair failed and we were unable to recover it. 00:29:32.424 [2024-07-15 22:26:57.657917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.424 [2024-07-15 22:26:57.657925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.424 qpair failed and we were unable to recover it. 00:29:32.424 [2024-07-15 22:26:57.658375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.424 [2024-07-15 22:26:57.658402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.424 qpair failed and we were unable to recover it. 00:29:32.424 [2024-07-15 22:26:57.658807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.424 [2024-07-15 22:26:57.658815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.424 qpair failed and we were unable to recover it. 00:29:32.424 [2024-07-15 22:26:57.659356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.424 [2024-07-15 22:26:57.659384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.424 qpair failed and we were unable to recover it. 
00:29:32.424 [2024-07-15 22:26:57.659788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.424 [2024-07-15 22:26:57.659797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.424 qpair failed and we were unable to recover it. 00:29:32.424 [2024-07-15 22:26:57.660114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.424 [2024-07-15 22:26:57.660121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.424 qpair failed and we were unable to recover it. 00:29:32.424 [2024-07-15 22:26:57.660615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.424 [2024-07-15 22:26:57.660625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.424 qpair failed and we were unable to recover it. 00:29:32.424 [2024-07-15 22:26:57.661043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.424 [2024-07-15 22:26:57.661050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.424 qpair failed and we were unable to recover it. 00:29:32.424 [2024-07-15 22:26:57.661523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.424 [2024-07-15 22:26:57.661551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.424 qpair failed and we were unable to recover it. 00:29:32.424 [2024-07-15 22:26:57.662010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.424 [2024-07-15 22:26:57.662019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.424 qpair failed and we were unable to recover it. 00:29:32.424 [2024-07-15 22:26:57.662222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.425 [2024-07-15 22:26:57.662229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.425 qpair failed and we were unable to recover it. 00:29:32.425 [2024-07-15 22:26:57.662646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.425 [2024-07-15 22:26:57.662652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.425 qpair failed and we were unable to recover it. 00:29:32.425 [2024-07-15 22:26:57.663046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.425 [2024-07-15 22:26:57.663053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.425 qpair failed and we were unable to recover it. 00:29:32.425 [2024-07-15 22:26:57.663451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.425 [2024-07-15 22:26:57.663459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.425 qpair failed and we were unable to recover it. 
00:29:32.425 [2024-07-15 22:26:57.663845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.425 [2024-07-15 22:26:57.663852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.425 qpair failed and we were unable to recover it. 00:29:32.425 [2024-07-15 22:26:57.664288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.425 [2024-07-15 22:26:57.664295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.425 qpair failed and we were unable to recover it. 00:29:32.425 [2024-07-15 22:26:57.664690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.425 [2024-07-15 22:26:57.664697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.425 qpair failed and we were unable to recover it. 00:29:32.425 [2024-07-15 22:26:57.665104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.425 [2024-07-15 22:26:57.665110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.425 qpair failed and we were unable to recover it. 00:29:32.425 [2024-07-15 22:26:57.665428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.425 [2024-07-15 22:26:57.665436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.425 qpair failed and we were unable to recover it. 00:29:32.425 [2024-07-15 22:26:57.665836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.425 [2024-07-15 22:26:57.665842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.425 qpair failed and we were unable to recover it. 00:29:32.425 [2024-07-15 22:26:57.666244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.425 [2024-07-15 22:26:57.666251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.425 qpair failed and we were unable to recover it. 00:29:32.425 [2024-07-15 22:26:57.666639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.425 [2024-07-15 22:26:57.666647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.425 qpair failed and we were unable to recover it. 00:29:32.425 [2024-07-15 22:26:57.667058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.425 [2024-07-15 22:26:57.667064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.425 qpair failed and we were unable to recover it. 00:29:32.425 [2024-07-15 22:26:57.667472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.425 [2024-07-15 22:26:57.667479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.425 qpair failed and we were unable to recover it. 
00:29:32.425 [2024-07-15 22:26:57.667947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.425 [2024-07-15 22:26:57.667954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.425 qpair failed and we were unable to recover it. 00:29:32.425 [2024-07-15 22:26:57.668462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.425 [2024-07-15 22:26:57.668490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.425 qpair failed and we were unable to recover it. 00:29:32.425 [2024-07-15 22:26:57.668966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.425 [2024-07-15 22:26:57.668974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.425 qpair failed and we were unable to recover it. 00:29:32.425 [2024-07-15 22:26:57.669493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.425 [2024-07-15 22:26:57.669520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.425 qpair failed and we were unable to recover it. 00:29:32.425 [2024-07-15 22:26:57.670070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.425 [2024-07-15 22:26:57.670079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.425 qpair failed and we were unable to recover it. 00:29:32.425 [2024-07-15 22:26:57.670622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.425 [2024-07-15 22:26:57.670650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.425 qpair failed and we were unable to recover it. 00:29:32.425 [2024-07-15 22:26:57.671090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.425 [2024-07-15 22:26:57.671098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.425 qpair failed and we were unable to recover it. 00:29:32.425 [2024-07-15 22:26:57.671599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.425 [2024-07-15 22:26:57.671628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.425 qpair failed and we were unable to recover it. 00:29:32.425 [2024-07-15 22:26:57.671956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.425 [2024-07-15 22:26:57.671964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.425 qpair failed and we were unable to recover it. 00:29:32.425 [2024-07-15 22:26:57.672497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.425 [2024-07-15 22:26:57.672526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.425 qpair failed and we were unable to recover it. 
00:29:32.425 [2024-07-15 22:26:57.672936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.425 [2024-07-15 22:26:57.672945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.425 qpair failed and we were unable to recover it. 00:29:32.425 [2024-07-15 22:26:57.673441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.425 [2024-07-15 22:26:57.673468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.425 qpair failed and we were unable to recover it. 00:29:32.425 [2024-07-15 22:26:57.673920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.425 [2024-07-15 22:26:57.673928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.425 qpair failed and we were unable to recover it. 00:29:32.425 [2024-07-15 22:26:57.674449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.425 [2024-07-15 22:26:57.674476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.425 qpair failed and we were unable to recover it. 00:29:32.425 [2024-07-15 22:26:57.675028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.425 [2024-07-15 22:26:57.675037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.425 qpair failed and we were unable to recover it. 00:29:32.425 [2024-07-15 22:26:57.675486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.425 [2024-07-15 22:26:57.675513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.425 qpair failed and we were unable to recover it. 00:29:32.425 [2024-07-15 22:26:57.675974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.425 [2024-07-15 22:26:57.675982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.425 qpair failed and we were unable to recover it. 00:29:32.425 [2024-07-15 22:26:57.676416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.425 [2024-07-15 22:26:57.676443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.425 qpair failed and we were unable to recover it. 00:29:32.425 [2024-07-15 22:26:57.676893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.425 [2024-07-15 22:26:57.676901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.425 qpair failed and we were unable to recover it. 00:29:32.425 [2024-07-15 22:26:57.677439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.425 [2024-07-15 22:26:57.677468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.425 qpair failed and we were unable to recover it. 
00:29:32.425 [2024-07-15 22:26:57.677887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.425 [2024-07-15 22:26:57.677896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.425 qpair failed and we were unable to recover it. 00:29:32.425 [2024-07-15 22:26:57.678360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.425 [2024-07-15 22:26:57.678387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.425 qpair failed and we were unable to recover it. 00:29:32.425 [2024-07-15 22:26:57.678861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.425 [2024-07-15 22:26:57.678872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.425 qpair failed and we were unable to recover it. 00:29:32.425 [2024-07-15 22:26:57.679342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.425 [2024-07-15 22:26:57.679370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.425 qpair failed and we were unable to recover it. 00:29:32.425 [2024-07-15 22:26:57.679676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.425 [2024-07-15 22:26:57.679686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.425 qpair failed and we were unable to recover it. 00:29:32.425 [2024-07-15 22:26:57.680007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.425 [2024-07-15 22:26:57.680013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.425 qpair failed and we were unable to recover it. 00:29:32.425 [2024-07-15 22:26:57.680522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.425 [2024-07-15 22:26:57.680529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.425 qpair failed and we were unable to recover it. 00:29:32.426 [2024-07-15 22:26:57.680929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.426 [2024-07-15 22:26:57.680935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.426 qpair failed and we were unable to recover it. 00:29:32.426 [2024-07-15 22:26:57.681273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.426 [2024-07-15 22:26:57.681280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.426 qpair failed and we were unable to recover it. 00:29:32.426 [2024-07-15 22:26:57.681654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.426 [2024-07-15 22:26:57.681661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.426 qpair failed and we were unable to recover it. 
00:29:32.426 [2024-07-15 22:26:57.681951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.426 [2024-07-15 22:26:57.681958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.426 qpair failed and we were unable to recover it. 00:29:32.426 [2024-07-15 22:26:57.682257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.426 [2024-07-15 22:26:57.682264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.426 qpair failed and we were unable to recover it. 00:29:32.426 [2024-07-15 22:26:57.682662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.426 [2024-07-15 22:26:57.682668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.426 qpair failed and we were unable to recover it. 00:29:32.699 [2024-07-15 22:26:57.683065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.699 [2024-07-15 22:26:57.683072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.699 qpair failed and we were unable to recover it. 00:29:32.699 [2024-07-15 22:26:57.683491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.699 [2024-07-15 22:26:57.683499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.699 qpair failed and we were unable to recover it. 00:29:32.699 [2024-07-15 22:26:57.683733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.699 [2024-07-15 22:26:57.683741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.699 qpair failed and we were unable to recover it. 00:29:32.699 [2024-07-15 22:26:57.684154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.699 [2024-07-15 22:26:57.684161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.699 qpair failed and we were unable to recover it. 00:29:32.699 [2024-07-15 22:26:57.684624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.699 [2024-07-15 22:26:57.684630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.699 qpair failed and we were unable to recover it. 00:29:32.699 [2024-07-15 22:26:57.685034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.699 [2024-07-15 22:26:57.685040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.699 qpair failed and we were unable to recover it. 00:29:32.699 [2024-07-15 22:26:57.685352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.699 [2024-07-15 22:26:57.685359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.699 qpair failed and we were unable to recover it. 
00:29:32.699 [2024-07-15 22:26:57.685650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.699 [2024-07-15 22:26:57.685657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.699 qpair failed and we were unable to recover it. 00:29:32.699 [2024-07-15 22:26:57.686048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.699 [2024-07-15 22:26:57.686054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.699 qpair failed and we were unable to recover it. 00:29:32.699 [2024-07-15 22:26:57.686475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.699 [2024-07-15 22:26:57.686482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.699 qpair failed and we were unable to recover it. 00:29:32.699 [2024-07-15 22:26:57.686790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.699 [2024-07-15 22:26:57.686797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.699 qpair failed and we were unable to recover it. 00:29:32.699 [2024-07-15 22:26:57.687201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.699 [2024-07-15 22:26:57.687208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.699 qpair failed and we were unable to recover it. 00:29:32.699 [2024-07-15 22:26:57.687308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.699 [2024-07-15 22:26:57.687314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.699 qpair failed and we were unable to recover it. 00:29:32.699 [2024-07-15 22:26:57.687726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.699 [2024-07-15 22:26:57.687732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.699 qpair failed and we were unable to recover it. 00:29:32.699 [2024-07-15 22:26:57.688148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.699 [2024-07-15 22:26:57.688156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.699 qpair failed and we were unable to recover it. 00:29:32.699 [2024-07-15 22:26:57.688605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.699 [2024-07-15 22:26:57.688614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.699 qpair failed and we were unable to recover it. 00:29:32.699 [2024-07-15 22:26:57.689053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.699 [2024-07-15 22:26:57.689059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.699 qpair failed and we were unable to recover it. 
00:29:32.699 [2024-07-15 22:26:57.689403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.699 [2024-07-15 22:26:57.689411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.700 qpair failed and we were unable to recover it. 00:29:32.700 [2024-07-15 22:26:57.689692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.700 [2024-07-15 22:26:57.689698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.700 qpair failed and we were unable to recover it. 00:29:32.700 [2024-07-15 22:26:57.690073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.700 [2024-07-15 22:26:57.690080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.700 qpair failed and we were unable to recover it. 00:29:32.700 [2024-07-15 22:26:57.690494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.700 [2024-07-15 22:26:57.690501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.700 qpair failed and we were unable to recover it. 00:29:32.700 [2024-07-15 22:26:57.690824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.700 [2024-07-15 22:26:57.690830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.700 qpair failed and we were unable to recover it. 00:29:32.700 [2024-07-15 22:26:57.691246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.700 [2024-07-15 22:26:57.691252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.700 qpair failed and we were unable to recover it. 00:29:32.700 [2024-07-15 22:26:57.691658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.700 [2024-07-15 22:26:57.691664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.700 qpair failed and we were unable to recover it. 00:29:32.700 [2024-07-15 22:26:57.692022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.700 [2024-07-15 22:26:57.692030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.700 qpair failed and we were unable to recover it. 00:29:32.700 [2024-07-15 22:26:57.692289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.700 [2024-07-15 22:26:57.692296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.700 qpair failed and we were unable to recover it. 00:29:32.700 [2024-07-15 22:26:57.692693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.700 [2024-07-15 22:26:57.692700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.700 qpair failed and we were unable to recover it. 
00:29:32.700 [2024-07-15 22:26:57.693133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.700 [2024-07-15 22:26:57.693140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:32.700 qpair failed and we were unable to recover it.
00:29:32.700-00:29:32.705 The same three-line failure repeats for every reconnect attempt logged between 22:26:57.693133 and 22:26:57.778529 (on the order of 200 attempts): posix_sock_create's connect() returns errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7f6158000b90 at addr=10.0.0.2, port=4420, and each qpair fails without recovering.
00:29:32.705 [2024-07-15 22:26:57.778521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.705 [2024-07-15 22:26:57.778529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:32.705 qpair failed and we were unable to recover it.
00:29:32.705 [2024-07-15 22:26:57.778933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.705 [2024-07-15 22:26:57.778940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.705 qpair failed and we were unable to recover it. 00:29:32.705 [2024-07-15 22:26:57.779437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.705 [2024-07-15 22:26:57.779464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.705 qpair failed and we were unable to recover it. 00:29:32.705 [2024-07-15 22:26:57.779873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.705 [2024-07-15 22:26:57.779881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.705 qpair failed and we were unable to recover it. 00:29:32.705 [2024-07-15 22:26:57.780376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.705 [2024-07-15 22:26:57.780404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.705 qpair failed and we were unable to recover it. 00:29:32.705 [2024-07-15 22:26:57.780812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.705 [2024-07-15 22:26:57.780820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.705 qpair failed and we were unable to recover it. 00:29:32.705 [2024-07-15 22:26:57.781231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.705 [2024-07-15 22:26:57.781238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.705 qpair failed and we were unable to recover it. 00:29:32.705 [2024-07-15 22:26:57.781649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.705 [2024-07-15 22:26:57.781657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.705 qpair failed and we were unable to recover it. 00:29:32.705 [2024-07-15 22:26:57.782092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.706 [2024-07-15 22:26:57.782099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.706 qpair failed and we were unable to recover it. 00:29:32.706 [2024-07-15 22:26:57.782510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.706 [2024-07-15 22:26:57.782517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.706 qpair failed and we were unable to recover it. 00:29:32.706 [2024-07-15 22:26:57.782904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.706 [2024-07-15 22:26:57.782911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.706 qpair failed and we were unable to recover it. 
00:29:32.706 [2024-07-15 22:26:57.783400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.706 [2024-07-15 22:26:57.783427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.706 qpair failed and we were unable to recover it. 00:29:32.706 [2024-07-15 22:26:57.783872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.706 [2024-07-15 22:26:57.783880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.706 qpair failed and we were unable to recover it. 00:29:32.706 [2024-07-15 22:26:57.784381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.706 [2024-07-15 22:26:57.784409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.706 qpair failed and we were unable to recover it. 00:29:32.706 [2024-07-15 22:26:57.784820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.706 [2024-07-15 22:26:57.784829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.706 qpair failed and we were unable to recover it. 00:29:32.706 [2024-07-15 22:26:57.785143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.706 [2024-07-15 22:26:57.785151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.706 qpair failed and we were unable to recover it. 00:29:32.706 [2024-07-15 22:26:57.785585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.706 [2024-07-15 22:26:57.785592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.706 qpair failed and we were unable to recover it. 00:29:32.706 [2024-07-15 22:26:57.785987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.706 [2024-07-15 22:26:57.785994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.706 qpair failed and we were unable to recover it. 00:29:32.706 [2024-07-15 22:26:57.786427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.706 [2024-07-15 22:26:57.786435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.706 qpair failed and we were unable to recover it. 00:29:32.706 [2024-07-15 22:26:57.786869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.706 [2024-07-15 22:26:57.786875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.706 qpair failed and we were unable to recover it. 00:29:32.706 [2024-07-15 22:26:57.787399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.706 [2024-07-15 22:26:57.787426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.706 qpair failed and we were unable to recover it. 
00:29:32.706 [2024-07-15 22:26:57.787852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.706 [2024-07-15 22:26:57.787861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.706 qpair failed and we were unable to recover it. 00:29:32.706 [2024-07-15 22:26:57.788342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.706 [2024-07-15 22:26:57.788369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.706 qpair failed and we were unable to recover it. 00:29:32.706 [2024-07-15 22:26:57.788849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.706 [2024-07-15 22:26:57.788857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.706 qpair failed and we were unable to recover it. 00:29:32.706 [2024-07-15 22:26:57.789055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.706 [2024-07-15 22:26:57.789064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.706 qpair failed and we were unable to recover it. 00:29:32.706 [2024-07-15 22:26:57.789395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.706 [2024-07-15 22:26:57.789402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.706 qpair failed and we were unable to recover it. 00:29:32.706 [2024-07-15 22:26:57.789808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.706 [2024-07-15 22:26:57.789814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.706 qpair failed and we were unable to recover it. 00:29:32.706 [2024-07-15 22:26:57.790213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.706 [2024-07-15 22:26:57.790220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.706 qpair failed and we were unable to recover it. 00:29:32.706 [2024-07-15 22:26:57.790648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.706 [2024-07-15 22:26:57.790654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.706 qpair failed and we were unable to recover it. 00:29:32.706 [2024-07-15 22:26:57.791078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.706 [2024-07-15 22:26:57.791086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.706 qpair failed and we were unable to recover it. 00:29:32.706 [2024-07-15 22:26:57.791503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.706 [2024-07-15 22:26:57.791511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.706 qpair failed and we were unable to recover it. 
00:29:32.706 [2024-07-15 22:26:57.791936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.706 [2024-07-15 22:26:57.791944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.706 qpair failed and we were unable to recover it. 00:29:32.706 [2024-07-15 22:26:57.792468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.706 [2024-07-15 22:26:57.792495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.706 qpair failed and we were unable to recover it. 00:29:32.706 [2024-07-15 22:26:57.792770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.706 [2024-07-15 22:26:57.792782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.706 qpair failed and we were unable to recover it. 00:29:32.706 [2024-07-15 22:26:57.793199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.706 [2024-07-15 22:26:57.793207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.706 qpair failed and we were unable to recover it. 00:29:32.706 [2024-07-15 22:26:57.793614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.706 [2024-07-15 22:26:57.793621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.706 qpair failed and we were unable to recover it. 00:29:32.706 [2024-07-15 22:26:57.794013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.706 [2024-07-15 22:26:57.794019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.706 qpair failed and we were unable to recover it. 00:29:32.706 [2024-07-15 22:26:57.794440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.706 [2024-07-15 22:26:57.794447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.706 qpair failed and we were unable to recover it. 00:29:32.706 [2024-07-15 22:26:57.794842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.706 [2024-07-15 22:26:57.794849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.706 qpair failed and we were unable to recover it. 00:29:32.706 [2024-07-15 22:26:57.795240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.706 [2024-07-15 22:26:57.795246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.706 qpair failed and we were unable to recover it. 00:29:32.706 [2024-07-15 22:26:57.795688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.706 [2024-07-15 22:26:57.795694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.706 qpair failed and we were unable to recover it. 
00:29:32.706 [2024-07-15 22:26:57.796118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.706 [2024-07-15 22:26:57.796129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.706 qpair failed and we were unable to recover it. 00:29:32.706 [2024-07-15 22:26:57.796552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.706 [2024-07-15 22:26:57.796559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.707 qpair failed and we were unable to recover it. 00:29:32.707 [2024-07-15 22:26:57.796976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.707 [2024-07-15 22:26:57.796983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.707 qpair failed and we were unable to recover it. 00:29:32.707 [2024-07-15 22:26:57.797472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.707 [2024-07-15 22:26:57.797500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.707 qpair failed and we were unable to recover it. 00:29:32.707 [2024-07-15 22:26:57.797986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.707 [2024-07-15 22:26:57.797994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.707 qpair failed and we were unable to recover it. 00:29:32.707 [2024-07-15 22:26:57.798492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.707 [2024-07-15 22:26:57.798519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.707 qpair failed and we were unable to recover it. 00:29:32.707 [2024-07-15 22:26:57.798927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.707 [2024-07-15 22:26:57.798936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.707 qpair failed and we were unable to recover it. 00:29:32.707 [2024-07-15 22:26:57.799460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.707 [2024-07-15 22:26:57.799488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.707 qpair failed and we were unable to recover it. 00:29:32.707 [2024-07-15 22:26:57.799900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.707 [2024-07-15 22:26:57.799908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.707 qpair failed and we were unable to recover it. 00:29:32.707 [2024-07-15 22:26:57.800401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.707 [2024-07-15 22:26:57.800428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.707 qpair failed and we were unable to recover it. 
00:29:32.707 [2024-07-15 22:26:57.800860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.707 [2024-07-15 22:26:57.800869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.707 qpair failed and we were unable to recover it. 00:29:32.707 [2024-07-15 22:26:57.801363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.707 [2024-07-15 22:26:57.801391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.707 qpair failed and we were unable to recover it. 00:29:32.707 [2024-07-15 22:26:57.801601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.707 [2024-07-15 22:26:57.801611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.707 qpair failed and we were unable to recover it. 00:29:32.707 [2024-07-15 22:26:57.802143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.707 [2024-07-15 22:26:57.802151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.707 qpair failed and we were unable to recover it. 00:29:32.707 [2024-07-15 22:26:57.802568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.707 [2024-07-15 22:26:57.802575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.707 qpair failed and we were unable to recover it. 00:29:32.707 [2024-07-15 22:26:57.802964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.707 [2024-07-15 22:26:57.802971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.707 qpair failed and we were unable to recover it. 00:29:32.707 [2024-07-15 22:26:57.803407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.707 [2024-07-15 22:26:57.803414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.707 qpair failed and we were unable to recover it. 00:29:32.707 [2024-07-15 22:26:57.803802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.707 [2024-07-15 22:26:57.803808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.707 qpair failed and we were unable to recover it. 00:29:32.707 [2024-07-15 22:26:57.804008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.707 [2024-07-15 22:26:57.804016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.707 qpair failed and we were unable to recover it. 00:29:32.707 [2024-07-15 22:26:57.804425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.707 [2024-07-15 22:26:57.804435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.707 qpair failed and we were unable to recover it. 
00:29:32.707 [2024-07-15 22:26:57.804823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.707 [2024-07-15 22:26:57.804829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.707 qpair failed and we were unable to recover it. 00:29:32.707 [2024-07-15 22:26:57.805146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.707 [2024-07-15 22:26:57.805154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.707 qpair failed and we were unable to recover it. 00:29:32.707 [2024-07-15 22:26:57.805568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.707 [2024-07-15 22:26:57.805575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.707 qpair failed and we were unable to recover it. 00:29:32.707 [2024-07-15 22:26:57.805967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.707 [2024-07-15 22:26:57.805973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.707 qpair failed and we were unable to recover it. 00:29:32.707 [2024-07-15 22:26:57.806282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.707 [2024-07-15 22:26:57.806290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.707 qpair failed and we were unable to recover it. 00:29:32.707 [2024-07-15 22:26:57.806705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.707 [2024-07-15 22:26:57.806712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.707 qpair failed and we were unable to recover it. 00:29:32.707 [2024-07-15 22:26:57.807105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.707 [2024-07-15 22:26:57.807113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.707 qpair failed and we were unable to recover it. 00:29:32.707 [2024-07-15 22:26:57.807600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.707 [2024-07-15 22:26:57.807607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.707 qpair failed and we were unable to recover it. 00:29:32.707 [2024-07-15 22:26:57.808021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.707 [2024-07-15 22:26:57.808028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.707 qpair failed and we were unable to recover it. 00:29:32.707 [2024-07-15 22:26:57.808449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.707 [2024-07-15 22:26:57.808456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.707 qpair failed and we were unable to recover it. 
00:29:32.707 [2024-07-15 22:26:57.808886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.707 [2024-07-15 22:26:57.808893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.707 qpair failed and we were unable to recover it. 00:29:32.707 [2024-07-15 22:26:57.809413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.707 [2024-07-15 22:26:57.809441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.707 qpair failed and we were unable to recover it. 00:29:32.707 [2024-07-15 22:26:57.809756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.707 [2024-07-15 22:26:57.809764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.707 qpair failed and we were unable to recover it. 00:29:32.707 [2024-07-15 22:26:57.810068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.707 [2024-07-15 22:26:57.810075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.707 qpair failed and we were unable to recover it. 00:29:32.707 [2024-07-15 22:26:57.810558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.707 [2024-07-15 22:26:57.810565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.707 qpair failed and we were unable to recover it. 00:29:32.707 [2024-07-15 22:26:57.810965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.707 [2024-07-15 22:26:57.810972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.707 qpair failed and we were unable to recover it. 00:29:32.707 [2024-07-15 22:26:57.811472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.707 [2024-07-15 22:26:57.811500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.707 qpair failed and we were unable to recover it. 00:29:32.707 [2024-07-15 22:26:57.811945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.707 [2024-07-15 22:26:57.811953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.707 qpair failed and we were unable to recover it. 00:29:32.707 [2024-07-15 22:26:57.812471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.707 [2024-07-15 22:26:57.812498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.707 qpair failed and we were unable to recover it. 00:29:32.707 [2024-07-15 22:26:57.812930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.707 [2024-07-15 22:26:57.812938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.707 qpair failed and we were unable to recover it. 
00:29:32.707 [2024-07-15 22:26:57.813506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.707 [2024-07-15 22:26:57.813534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.707 qpair failed and we were unable to recover it. 00:29:32.707 [2024-07-15 22:26:57.813978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.707 [2024-07-15 22:26:57.813986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.707 qpair failed and we were unable to recover it. 00:29:32.708 [2024-07-15 22:26:57.814478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.708 [2024-07-15 22:26:57.814507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.708 qpair failed and we were unable to recover it. 00:29:32.708 [2024-07-15 22:26:57.814941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.708 [2024-07-15 22:26:57.814949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.708 qpair failed and we were unable to recover it. 00:29:32.708 [2024-07-15 22:26:57.815449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.708 [2024-07-15 22:26:57.815476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.708 qpair failed and we were unable to recover it. 00:29:32.708 [2024-07-15 22:26:57.815911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.708 [2024-07-15 22:26:57.815919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.708 qpair failed and we were unable to recover it. 00:29:32.708 [2024-07-15 22:26:57.816429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.708 [2024-07-15 22:26:57.816457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.708 qpair failed and we were unable to recover it. 00:29:32.708 [2024-07-15 22:26:57.816931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.708 [2024-07-15 22:26:57.816940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.708 qpair failed and we were unable to recover it. 00:29:32.708 [2024-07-15 22:26:57.817469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.708 [2024-07-15 22:26:57.817497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.708 qpair failed and we were unable to recover it. 00:29:32.708 [2024-07-15 22:26:57.817889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.708 [2024-07-15 22:26:57.817897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.708 qpair failed and we were unable to recover it. 
00:29:32.708 [2024-07-15 22:26:57.818342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.708 [2024-07-15 22:26:57.818369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.708 qpair failed and we were unable to recover it. 00:29:32.708 [2024-07-15 22:26:57.818847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.708 [2024-07-15 22:26:57.818855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.708 qpair failed and we were unable to recover it. 00:29:32.708 [2024-07-15 22:26:57.819353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.708 [2024-07-15 22:26:57.819381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.708 qpair failed and we were unable to recover it. 00:29:32.708 [2024-07-15 22:26:57.819781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.708 [2024-07-15 22:26:57.819789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.708 qpair failed and we were unable to recover it. 00:29:32.708 [2024-07-15 22:26:57.820306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.708 [2024-07-15 22:26:57.820334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.708 qpair failed and we were unable to recover it. 00:29:32.708 [2024-07-15 22:26:57.820793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.708 [2024-07-15 22:26:57.820802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.708 qpair failed and we were unable to recover it. 00:29:32.708 [2024-07-15 22:26:57.821208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.708 [2024-07-15 22:26:57.821216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.708 qpair failed and we were unable to recover it. 00:29:32.708 [2024-07-15 22:26:57.821643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.708 [2024-07-15 22:26:57.821650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.708 qpair failed and we were unable to recover it. 00:29:32.708 [2024-07-15 22:26:57.821925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.708 [2024-07-15 22:26:57.821934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.708 qpair failed and we were unable to recover it. 00:29:32.708 [2024-07-15 22:26:57.822045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.708 [2024-07-15 22:26:57.822057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.708 qpair failed and we were unable to recover it. 
00:29:32.708 [2024-07-15 22:26:57.822475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.708 [2024-07-15 22:26:57.822483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.708 qpair failed and we were unable to recover it. 00:29:32.708 [2024-07-15 22:26:57.822910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.708 [2024-07-15 22:26:57.822917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.708 qpair failed and we were unable to recover it. 00:29:32.708 [2024-07-15 22:26:57.823383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.708 [2024-07-15 22:26:57.823390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.708 qpair failed and we were unable to recover it. 00:29:32.708 [2024-07-15 22:26:57.823784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.708 [2024-07-15 22:26:57.823791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.708 qpair failed and we were unable to recover it. 00:29:32.708 [2024-07-15 22:26:57.824209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.708 [2024-07-15 22:26:57.824217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.708 qpair failed and we were unable to recover it. 00:29:32.708 [2024-07-15 22:26:57.824624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.708 [2024-07-15 22:26:57.824632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.708 qpair failed and we were unable to recover it. 00:29:32.708 [2024-07-15 22:26:57.825038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.708 [2024-07-15 22:26:57.825044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.708 qpair failed and we were unable to recover it. 00:29:32.708 [2024-07-15 22:26:57.825439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.708 [2024-07-15 22:26:57.825446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.708 qpair failed and we were unable to recover it. 00:29:32.708 [2024-07-15 22:26:57.825749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.708 [2024-07-15 22:26:57.825756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.708 qpair failed and we were unable to recover it. 00:29:32.708 [2024-07-15 22:26:57.826035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.708 [2024-07-15 22:26:57.826041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.708 qpair failed and we were unable to recover it. 
00:29:32.708 [2024-07-15 22:26:57.826372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.708 [2024-07-15 22:26:57.826379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.708 qpair failed and we were unable to recover it. 00:29:32.708 [2024-07-15 22:26:57.826717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.708 [2024-07-15 22:26:57.826723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.708 qpair failed and we were unable to recover it. 00:29:32.708 [2024-07-15 22:26:57.827120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.708 [2024-07-15 22:26:57.827131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.708 qpair failed and we were unable to recover it. 00:29:32.708 [2024-07-15 22:26:57.827522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.708 [2024-07-15 22:26:57.827529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.708 qpair failed and we were unable to recover it. 00:29:32.708 [2024-07-15 22:26:57.827930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.708 [2024-07-15 22:26:57.827938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.708 qpair failed and we were unable to recover it. 00:29:32.708 [2024-07-15 22:26:57.828364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.708 [2024-07-15 22:26:57.828371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.709 qpair failed and we were unable to recover it. 00:29:32.709 [2024-07-15 22:26:57.828760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.709 [2024-07-15 22:26:57.828767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.709 qpair failed and we were unable to recover it. 00:29:32.709 [2024-07-15 22:26:57.829319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.709 [2024-07-15 22:26:57.829347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.709 qpair failed and we were unable to recover it. 00:29:32.709 [2024-07-15 22:26:57.829765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.709 [2024-07-15 22:26:57.829773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.709 qpair failed and we were unable to recover it. 00:29:32.709 [2024-07-15 22:26:57.830202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.709 [2024-07-15 22:26:57.830210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.709 qpair failed and we were unable to recover it. 
00:29:32.709 [2024-07-15 22:26:57.830539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.709 [2024-07-15 22:26:57.830546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.709 qpair failed and we were unable to recover it. 00:29:32.709 [2024-07-15 22:26:57.830964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.709 [2024-07-15 22:26:57.830973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.709 qpair failed and we were unable to recover it. 00:29:32.709 [2024-07-15 22:26:57.831365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.709 [2024-07-15 22:26:57.831372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.709 qpair failed and we were unable to recover it. 00:29:32.709 [2024-07-15 22:26:57.831847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.709 [2024-07-15 22:26:57.831854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.709 qpair failed and we were unable to recover it. 00:29:32.709 [2024-07-15 22:26:57.832329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.709 [2024-07-15 22:26:57.832357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.709 qpair failed and we were unable to recover it. 00:29:32.709 [2024-07-15 22:26:57.832760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.709 [2024-07-15 22:26:57.832769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.709 qpair failed and we were unable to recover it. 00:29:32.709 [2024-07-15 22:26:57.833062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.709 [2024-07-15 22:26:57.833069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.709 qpair failed and we were unable to recover it. 00:29:32.709 [2024-07-15 22:26:57.833480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.709 [2024-07-15 22:26:57.833487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.709 qpair failed and we were unable to recover it. 00:29:32.709 [2024-07-15 22:26:57.833885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.709 [2024-07-15 22:26:57.833892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.709 qpair failed and we were unable to recover it. 00:29:32.709 [2024-07-15 22:26:57.834398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.709 [2024-07-15 22:26:57.834426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.709 qpair failed and we were unable to recover it. 
00:29:32.709 [2024-07-15 22:26:57.834857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.709 [2024-07-15 22:26:57.834865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.709 qpair failed and we were unable to recover it. 00:29:32.709 [2024-07-15 22:26:57.835359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.709 [2024-07-15 22:26:57.835386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.709 qpair failed and we were unable to recover it. 00:29:32.709 [2024-07-15 22:26:57.835790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.709 [2024-07-15 22:26:57.835798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.709 qpair failed and we were unable to recover it. 00:29:32.709 [2024-07-15 22:26:57.836265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.709 [2024-07-15 22:26:57.836273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.709 qpair failed and we were unable to recover it. 00:29:32.709 [2024-07-15 22:26:57.836734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.709 [2024-07-15 22:26:57.836740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.709 qpair failed and we were unable to recover it. 00:29:32.709 [2024-07-15 22:26:57.837135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.709 [2024-07-15 22:26:57.837142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.709 qpair failed and we were unable to recover it. 00:29:32.709 [2024-07-15 22:26:57.837453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.709 [2024-07-15 22:26:57.837461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.709 qpair failed and we were unable to recover it. 00:29:32.709 [2024-07-15 22:26:57.837867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.709 [2024-07-15 22:26:57.837874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.709 qpair failed and we were unable to recover it. 00:29:32.709 [2024-07-15 22:26:57.838390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.709 [2024-07-15 22:26:57.838418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.709 qpair failed and we were unable to recover it. 00:29:32.709 [2024-07-15 22:26:57.838818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.709 [2024-07-15 22:26:57.838829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.709 qpair failed and we were unable to recover it. 
00:29:32.709 [2024-07-15 22:26:57.839036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.709 [2024-07-15 22:26:57.839046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.709 qpair failed and we were unable to recover it. 00:29:32.709 [2024-07-15 22:26:57.839435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.709 [2024-07-15 22:26:57.839442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.709 qpair failed and we were unable to recover it. 00:29:32.709 [2024-07-15 22:26:57.839747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.709 [2024-07-15 22:26:57.839754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.709 qpair failed and we were unable to recover it. 00:29:32.709 [2024-07-15 22:26:57.840187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.709 [2024-07-15 22:26:57.840194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.709 qpair failed and we were unable to recover it. 00:29:32.709 [2024-07-15 22:26:57.840586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.709 [2024-07-15 22:26:57.840593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.709 qpair failed and we were unable to recover it. 00:29:32.709 [2024-07-15 22:26:57.840981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.709 [2024-07-15 22:26:57.840988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.709 qpair failed and we were unable to recover it. 00:29:32.709 [2024-07-15 22:26:57.841196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.709 [2024-07-15 22:26:57.841204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.709 qpair failed and we were unable to recover it. 00:29:32.709 [2024-07-15 22:26:57.841612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.709 [2024-07-15 22:26:57.841619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.709 qpair failed and we were unable to recover it. 00:29:32.709 [2024-07-15 22:26:57.842012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.709 [2024-07-15 22:26:57.842019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.709 qpair failed and we were unable to recover it. 00:29:32.709 [2024-07-15 22:26:57.842452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.709 [2024-07-15 22:26:57.842459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.709 qpair failed and we were unable to recover it. 
00:29:32.709 [2024-07-15 22:26:57.842852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.709 [2024-07-15 22:26:57.842859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.709 qpair failed and we were unable to recover it. 00:29:32.709 [2024-07-15 22:26:57.843250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.709 [2024-07-15 22:26:57.843257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.709 qpair failed and we were unable to recover it. 00:29:32.709 [2024-07-15 22:26:57.843678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.709 [2024-07-15 22:26:57.843685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.709 qpair failed and we were unable to recover it. 00:29:32.709 [2024-07-15 22:26:57.844101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.709 [2024-07-15 22:26:57.844108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.709 qpair failed and we were unable to recover it. 00:29:32.709 [2024-07-15 22:26:57.844516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.709 [2024-07-15 22:26:57.844524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.709 qpair failed and we were unable to recover it. 00:29:32.709 [2024-07-15 22:26:57.844865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.709 [2024-07-15 22:26:57.844872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.709 qpair failed and we were unable to recover it. 00:29:32.710 [2024-07-15 22:26:57.845073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.710 [2024-07-15 22:26:57.845083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.710 qpair failed and we were unable to recover it. 00:29:32.710 [2024-07-15 22:26:57.845452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.710 [2024-07-15 22:26:57.845460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.710 qpair failed and we were unable to recover it. 00:29:32.710 [2024-07-15 22:26:57.845883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.710 [2024-07-15 22:26:57.845890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.710 qpair failed and we were unable to recover it. 00:29:32.710 [2024-07-15 22:26:57.846313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.710 [2024-07-15 22:26:57.846319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.710 qpair failed and we were unable to recover it. 
00:29:32.710 [2024-07-15 22:26:57.846588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.710 [2024-07-15 22:26:57.846595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.710 qpair failed and we were unable to recover it. 00:29:32.710 [2024-07-15 22:26:57.847003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.710 [2024-07-15 22:26:57.847009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.710 qpair failed and we were unable to recover it. 00:29:32.710 [2024-07-15 22:26:57.847403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.710 [2024-07-15 22:26:57.847410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.710 qpair failed and we were unable to recover it. 00:29:32.710 [2024-07-15 22:26:57.847720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.710 [2024-07-15 22:26:57.847728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.710 qpair failed and we were unable to recover it. 00:29:32.710 [2024-07-15 22:26:57.848135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.710 [2024-07-15 22:26:57.848142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.710 qpair failed and we were unable to recover it. 00:29:32.710 [2024-07-15 22:26:57.848557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.710 [2024-07-15 22:26:57.848563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.710 qpair failed and we were unable to recover it. 00:29:32.710 [2024-07-15 22:26:57.848994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.710 [2024-07-15 22:26:57.849000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.710 qpair failed and we were unable to recover it. 00:29:32.710 [2024-07-15 22:26:57.849397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.710 [2024-07-15 22:26:57.849404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.710 qpair failed and we were unable to recover it. 00:29:32.710 [2024-07-15 22:26:57.849789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.710 [2024-07-15 22:26:57.849795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.710 qpair failed and we were unable to recover it. 00:29:32.710 [2024-07-15 22:26:57.850182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.710 [2024-07-15 22:26:57.850188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.710 qpair failed and we were unable to recover it. 
00:29:32.710 [2024-07-15 22:26:57.850602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.710 [2024-07-15 22:26:57.850608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.710 qpair failed and we were unable to recover it. 00:29:32.710 [2024-07-15 22:26:57.851014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.710 [2024-07-15 22:26:57.851020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.710 qpair failed and we were unable to recover it. 00:29:32.710 [2024-07-15 22:26:57.851413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.710 [2024-07-15 22:26:57.851421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.710 qpair failed and we were unable to recover it. 00:29:32.710 [2024-07-15 22:26:57.851830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.710 [2024-07-15 22:26:57.851836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.710 qpair failed and we were unable to recover it. 00:29:32.710 [2024-07-15 22:26:57.852222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.710 [2024-07-15 22:26:57.852228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.710 qpair failed and we were unable to recover it. 00:29:32.710 [2024-07-15 22:26:57.852653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.710 [2024-07-15 22:26:57.852659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.710 qpair failed and we were unable to recover it. 00:29:32.710 [2024-07-15 22:26:57.853044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.710 [2024-07-15 22:26:57.853051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.710 qpair failed and we were unable to recover it. 00:29:32.710 [2024-07-15 22:26:57.853473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.710 [2024-07-15 22:26:57.853480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.710 qpair failed and we were unable to recover it. 00:29:32.710 [2024-07-15 22:26:57.853902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.710 [2024-07-15 22:26:57.853909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.710 qpair failed and we were unable to recover it. 00:29:32.710 [2024-07-15 22:26:57.854186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.710 [2024-07-15 22:26:57.854195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.710 qpair failed and we were unable to recover it. 
00:29:32.710 [2024-07-15 22:26:57.854590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.710 [2024-07-15 22:26:57.854597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.710 qpair failed and we were unable to recover it. 00:29:32.710 [2024-07-15 22:26:57.855004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.710 [2024-07-15 22:26:57.855010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.710 qpair failed and we were unable to recover it. 00:29:32.710 [2024-07-15 22:26:57.855311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.710 [2024-07-15 22:26:57.855319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.710 qpair failed and we were unable to recover it. 00:29:32.710 [2024-07-15 22:26:57.855743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.710 [2024-07-15 22:26:57.855749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.710 qpair failed and we were unable to recover it. 00:29:32.710 [2024-07-15 22:26:57.856050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.710 [2024-07-15 22:26:57.856056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.710 qpair failed and we were unable to recover it. 00:29:32.710 [2024-07-15 22:26:57.856424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.710 [2024-07-15 22:26:57.856431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.710 qpair failed and we were unable to recover it. 00:29:32.710 [2024-07-15 22:26:57.856829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.710 [2024-07-15 22:26:57.856836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.710 qpair failed and we were unable to recover it. 00:29:32.710 [2024-07-15 22:26:57.857252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.710 [2024-07-15 22:26:57.857259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.710 qpair failed and we were unable to recover it. 00:29:32.710 [2024-07-15 22:26:57.857571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.710 [2024-07-15 22:26:57.857578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.710 qpair failed and we were unable to recover it. 00:29:32.710 [2024-07-15 22:26:57.857989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.710 [2024-07-15 22:26:57.857996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.710 qpair failed and we were unable to recover it. 
00:29:32.710 [2024-07-15 22:26:57.858381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.710 [2024-07-15 22:26:57.858389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.710 qpair failed and we were unable to recover it. 00:29:32.710 [2024-07-15 22:26:57.858818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.710 [2024-07-15 22:26:57.858824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.710 qpair failed and we were unable to recover it. 00:29:32.710 [2024-07-15 22:26:57.859137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.710 [2024-07-15 22:26:57.859144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.710 qpair failed and we were unable to recover it. 00:29:32.710 [2024-07-15 22:26:57.859553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.710 [2024-07-15 22:26:57.859559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.710 qpair failed and we were unable to recover it. 00:29:32.710 [2024-07-15 22:26:57.859953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.710 [2024-07-15 22:26:57.859960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.710 qpair failed and we were unable to recover it. 00:29:32.710 [2024-07-15 22:26:57.860164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.711 [2024-07-15 22:26:57.860173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.711 qpair failed and we were unable to recover it. 00:29:32.711 [2024-07-15 22:26:57.860559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.711 [2024-07-15 22:26:57.860567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.711 qpair failed and we were unable to recover it. 00:29:32.711 [2024-07-15 22:26:57.860970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.711 [2024-07-15 22:26:57.860977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.711 qpair failed and we were unable to recover it. 00:29:32.711 [2024-07-15 22:26:57.861384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.711 [2024-07-15 22:26:57.861390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.711 qpair failed and we were unable to recover it. 00:29:32.711 [2024-07-15 22:26:57.861778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.711 [2024-07-15 22:26:57.861785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.711 qpair failed and we were unable to recover it. 
00:29:32.711 [2024-07-15 22:26:57.862208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.711 [2024-07-15 22:26:57.862215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.711 qpair failed and we were unable to recover it. 00:29:32.711 [2024-07-15 22:26:57.862428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.711 [2024-07-15 22:26:57.862434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.711 qpair failed and we were unable to recover it. 00:29:32.711 [2024-07-15 22:26:57.862860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.711 [2024-07-15 22:26:57.862866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.711 qpair failed and we were unable to recover it. 00:29:32.711 [2024-07-15 22:26:57.863252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.711 [2024-07-15 22:26:57.863265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.711 qpair failed and we were unable to recover it. 00:29:32.711 [2024-07-15 22:26:57.863687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.711 [2024-07-15 22:26:57.863694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.711 qpair failed and we were unable to recover it. 00:29:32.711 [2024-07-15 22:26:57.864113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.711 [2024-07-15 22:26:57.864120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.711 qpair failed and we were unable to recover it. 00:29:32.711 [2024-07-15 22:26:57.864533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.711 [2024-07-15 22:26:57.864541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.711 qpair failed and we were unable to recover it. 00:29:32.711 [2024-07-15 22:26:57.864966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.711 [2024-07-15 22:26:57.864973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.711 qpair failed and we were unable to recover it. 00:29:32.711 [2024-07-15 22:26:57.865469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.711 [2024-07-15 22:26:57.865496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.711 qpair failed and we were unable to recover it. 00:29:32.711 [2024-07-15 22:26:57.865978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.711 [2024-07-15 22:26:57.865986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.711 qpair failed and we were unable to recover it. 
00:29:32.711 [2024-07-15 22:26:57.866483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.711 [2024-07-15 22:26:57.866510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.711 qpair failed and we were unable to recover it. 00:29:32.711 [2024-07-15 22:26:57.866990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.711 [2024-07-15 22:26:57.866998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.711 qpair failed and we were unable to recover it. 00:29:32.711 [2024-07-15 22:26:57.867490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.711 [2024-07-15 22:26:57.867517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.711 qpair failed and we were unable to recover it. 00:29:32.711 [2024-07-15 22:26:57.867916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.711 [2024-07-15 22:26:57.867926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.711 qpair failed and we were unable to recover it. 00:29:32.711 [2024-07-15 22:26:57.868470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.711 [2024-07-15 22:26:57.868498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.711 qpair failed and we were unable to recover it. 00:29:32.711 [2024-07-15 22:26:57.868902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.711 [2024-07-15 22:26:57.868911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.711 qpair failed and we were unable to recover it. 00:29:32.711 [2024-07-15 22:26:57.869413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.711 [2024-07-15 22:26:57.869440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.711 qpair failed and we were unable to recover it. 00:29:32.711 [2024-07-15 22:26:57.869843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.711 [2024-07-15 22:26:57.869852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.711 qpair failed and we were unable to recover it. 00:29:32.711 [2024-07-15 22:26:57.870362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.711 [2024-07-15 22:26:57.870389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.711 qpair failed and we were unable to recover it. 00:29:32.711 [2024-07-15 22:26:57.870820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.711 [2024-07-15 22:26:57.870831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.711 qpair failed and we were unable to recover it. 
00:29:32.711 [2024-07-15 22:26:57.871341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.711 [2024-07-15 22:26:57.871369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.711 qpair failed and we were unable to recover it. 00:29:32.711 [2024-07-15 22:26:57.871762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.711 [2024-07-15 22:26:57.871770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.711 qpair failed and we were unable to recover it. 00:29:32.711 [2024-07-15 22:26:57.872164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.711 [2024-07-15 22:26:57.872172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.711 qpair failed and we were unable to recover it. 00:29:32.711 [2024-07-15 22:26:57.872557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.711 [2024-07-15 22:26:57.872565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.711 qpair failed and we were unable to recover it. 00:29:32.711 [2024-07-15 22:26:57.872978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.711 [2024-07-15 22:26:57.872984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.711 qpair failed and we were unable to recover it. 00:29:32.711 [2024-07-15 22:26:57.873194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.711 [2024-07-15 22:26:57.873204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.711 qpair failed and we were unable to recover it. 00:29:32.711 [2024-07-15 22:26:57.873625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.711 [2024-07-15 22:26:57.873632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.711 qpair failed and we were unable to recover it. 00:29:32.711 [2024-07-15 22:26:57.874028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.711 [2024-07-15 22:26:57.874035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.711 qpair failed and we were unable to recover it. 00:29:32.711 [2024-07-15 22:26:57.874447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.711 [2024-07-15 22:26:57.874454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.711 qpair failed and we were unable to recover it. 00:29:32.711 [2024-07-15 22:26:57.874851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.711 [2024-07-15 22:26:57.874857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.711 qpair failed and we were unable to recover it. 
00:29:32.711 [2024-07-15 22:26:57.875156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.711 [2024-07-15 22:26:57.875163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.711 qpair failed and we were unable to recover it. 00:29:32.711 [2024-07-15 22:26:57.875490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.711 [2024-07-15 22:26:57.875496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.711 qpair failed and we were unable to recover it. 00:29:32.711 [2024-07-15 22:26:57.875880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.711 [2024-07-15 22:26:57.875887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.711 qpair failed and we were unable to recover it. 00:29:32.711 [2024-07-15 22:26:57.876299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.711 [2024-07-15 22:26:57.876306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.711 qpair failed and we were unable to recover it. 00:29:32.711 [2024-07-15 22:26:57.876716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.711 [2024-07-15 22:26:57.876723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.711 qpair failed and we were unable to recover it. 00:29:32.711 [2024-07-15 22:26:57.877149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.711 [2024-07-15 22:26:57.877155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.711 qpair failed and we were unable to recover it. 00:29:32.712 [2024-07-15 22:26:57.877637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.712 [2024-07-15 22:26:57.877643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.712 qpair failed and we were unable to recover it. 00:29:32.712 [2024-07-15 22:26:57.878027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.712 [2024-07-15 22:26:57.878034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.712 qpair failed and we were unable to recover it. 00:29:32.712 [2024-07-15 22:26:57.878349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.712 [2024-07-15 22:26:57.878356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.712 qpair failed and we were unable to recover it. 00:29:32.712 [2024-07-15 22:26:57.878761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.712 [2024-07-15 22:26:57.878767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.712 qpair failed and we were unable to recover it. 
00:29:32.712 [2024-07-15 22:26:57.879154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.712 [2024-07-15 22:26:57.879161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.712 qpair failed and we were unable to recover it. 00:29:32.712 [2024-07-15 22:26:57.879584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.712 [2024-07-15 22:26:57.879590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.712 qpair failed and we were unable to recover it. 00:29:32.712 [2024-07-15 22:26:57.880014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.712 [2024-07-15 22:26:57.880020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.712 qpair failed and we were unable to recover it. 00:29:32.712 [2024-07-15 22:26:57.880480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.712 [2024-07-15 22:26:57.880486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.712 qpair failed and we were unable to recover it. 00:29:32.712 [2024-07-15 22:26:57.880889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.712 [2024-07-15 22:26:57.880896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.712 qpair failed and we were unable to recover it. 00:29:32.712 [2024-07-15 22:26:57.881302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.712 [2024-07-15 22:26:57.881309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.712 qpair failed and we were unable to recover it. 00:29:32.712 [2024-07-15 22:26:57.881737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.712 [2024-07-15 22:26:57.881743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.712 qpair failed and we were unable to recover it. 00:29:32.712 [2024-07-15 22:26:57.882136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.712 [2024-07-15 22:26:57.882143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.712 qpair failed and we were unable to recover it. 00:29:32.712 [2024-07-15 22:26:57.882522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.712 [2024-07-15 22:26:57.882528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.712 qpair failed and we were unable to recover it. 00:29:32.712 [2024-07-15 22:26:57.882952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.712 [2024-07-15 22:26:57.882959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.712 qpair failed and we were unable to recover it. 
00:29:32.712 [2024-07-15 22:26:57.883393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.712 [2024-07-15 22:26:57.883399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.712 qpair failed and we were unable to recover it. 00:29:32.712 [2024-07-15 22:26:57.883831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.712 [2024-07-15 22:26:57.883837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.712 qpair failed and we were unable to recover it. 00:29:32.712 [2024-07-15 22:26:57.884233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.712 [2024-07-15 22:26:57.884240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.712 qpair failed and we were unable to recover it. 00:29:32.712 [2024-07-15 22:26:57.884675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.712 [2024-07-15 22:26:57.884682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.712 qpair failed and we were unable to recover it. 00:29:32.712 [2024-07-15 22:26:57.885075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.712 [2024-07-15 22:26:57.885081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.712 qpair failed and we were unable to recover it. 00:29:32.712 [2024-07-15 22:26:57.885475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.712 [2024-07-15 22:26:57.885481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.712 qpair failed and we were unable to recover it. 00:29:32.712 [2024-07-15 22:26:57.885908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.712 [2024-07-15 22:26:57.885914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.712 qpair failed and we were unable to recover it. 00:29:32.712 [2024-07-15 22:26:57.886332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.712 [2024-07-15 22:26:57.886360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.712 qpair failed and we were unable to recover it. 00:29:32.712 [2024-07-15 22:26:57.886820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.712 [2024-07-15 22:26:57.886828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.712 qpair failed and we were unable to recover it. 00:29:32.712 [2024-07-15 22:26:57.887221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.712 [2024-07-15 22:26:57.887231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.712 qpair failed and we were unable to recover it. 
00:29:32.712 [2024-07-15 22:26:57.887622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.712 [2024-07-15 22:26:57.887629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.712 qpair failed and we were unable to recover it. 00:29:32.712 [2024-07-15 22:26:57.888039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.712 [2024-07-15 22:26:57.888046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.712 qpair failed and we were unable to recover it. 00:29:32.712 [2024-07-15 22:26:57.888456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.712 [2024-07-15 22:26:57.888463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.712 qpair failed and we were unable to recover it. 00:29:32.712 [2024-07-15 22:26:57.888933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.712 [2024-07-15 22:26:57.888941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.712 qpair failed and we were unable to recover it. 00:29:32.712 [2024-07-15 22:26:57.889433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.712 [2024-07-15 22:26:57.889460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.712 qpair failed and we were unable to recover it. 00:29:32.712 [2024-07-15 22:26:57.889861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.712 [2024-07-15 22:26:57.889869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.712 qpair failed and we were unable to recover it. 00:29:32.712 [2024-07-15 22:26:57.890393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.712 [2024-07-15 22:26:57.890420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.712 qpair failed and we were unable to recover it. 00:29:32.712 [2024-07-15 22:26:57.890876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.712 [2024-07-15 22:26:57.890885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.712 qpair failed and we were unable to recover it. 00:29:32.712 [2024-07-15 22:26:57.891403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.712 [2024-07-15 22:26:57.891431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.712 qpair failed and we were unable to recover it. 00:29:32.712 [2024-07-15 22:26:57.891913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.712 [2024-07-15 22:26:57.891921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.712 qpair failed and we were unable to recover it. 
00:29:32.712 [2024-07-15 22:26:57.892441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.712 [2024-07-15 22:26:57.892469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.712 qpair failed and we were unable to recover it. 00:29:32.713 [2024-07-15 22:26:57.892900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.713 [2024-07-15 22:26:57.892909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.713 qpair failed and we were unable to recover it. 00:29:32.713 [2024-07-15 22:26:57.893104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.713 [2024-07-15 22:26:57.893113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.713 qpair failed and we were unable to recover it. 00:29:32.713 [2024-07-15 22:26:57.893542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.713 [2024-07-15 22:26:57.893549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.713 qpair failed and we were unable to recover it. 00:29:32.713 [2024-07-15 22:26:57.893982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.713 [2024-07-15 22:26:57.893988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.713 qpair failed and we were unable to recover it. 00:29:32.713 [2024-07-15 22:26:57.894379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.713 [2024-07-15 22:26:57.894407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.713 qpair failed and we were unable to recover it. 00:29:32.713 [2024-07-15 22:26:57.894704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.713 [2024-07-15 22:26:57.894713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.713 qpair failed and we were unable to recover it. 00:29:32.713 [2024-07-15 22:26:57.895119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.713 [2024-07-15 22:26:57.895131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.713 qpair failed and we were unable to recover it. 00:29:32.713 [2024-07-15 22:26:57.895557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.713 [2024-07-15 22:26:57.895563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.713 qpair failed and we were unable to recover it. 00:29:32.713 [2024-07-15 22:26:57.895913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.713 [2024-07-15 22:26:57.895920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.713 qpair failed and we were unable to recover it. 
00:29:32.713 [2024-07-15 22:26:57.896497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.713 [2024-07-15 22:26:57.896526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.713 qpair failed and we were unable to recover it. 00:29:32.713 [2024-07-15 22:26:57.896841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.713 [2024-07-15 22:26:57.896850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.713 qpair failed and we were unable to recover it. 00:29:32.713 [2024-07-15 22:26:57.897379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.713 [2024-07-15 22:26:57.897407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.713 qpair failed and we were unable to recover it. 00:29:32.713 [2024-07-15 22:26:57.897709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.713 [2024-07-15 22:26:57.897718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.713 qpair failed and we were unable to recover it. 00:29:32.713 [2024-07-15 22:26:57.898129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.713 [2024-07-15 22:26:57.898137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.713 qpair failed and we were unable to recover it. 00:29:32.713 [2024-07-15 22:26:57.898529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.713 [2024-07-15 22:26:57.898536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.713 qpair failed and we were unable to recover it. 00:29:32.713 [2024-07-15 22:26:57.898963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.713 [2024-07-15 22:26:57.898971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.713 qpair failed and we were unable to recover it. 00:29:32.713 [2024-07-15 22:26:57.899500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.713 [2024-07-15 22:26:57.899527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.713 qpair failed and we were unable to recover it. 00:29:32.713 [2024-07-15 22:26:57.899938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.713 [2024-07-15 22:26:57.899946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.713 qpair failed and we were unable to recover it. 00:29:32.713 [2024-07-15 22:26:57.900446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.713 [2024-07-15 22:26:57.900473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.713 qpair failed and we were unable to recover it. 
00:29:32.713 [2024-07-15 22:26:57.900901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.713 [2024-07-15 22:26:57.900910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.713 qpair failed and we were unable to recover it. 00:29:32.713 [2024-07-15 22:26:57.901415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.713 [2024-07-15 22:26:57.901443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.713 qpair failed and we were unable to recover it. 00:29:32.713 [2024-07-15 22:26:57.901846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.713 [2024-07-15 22:26:57.901853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.713 qpair failed and we were unable to recover it. 00:29:32.713 [2024-07-15 22:26:57.902337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.713 [2024-07-15 22:26:57.902364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.713 qpair failed and we were unable to recover it. 00:29:32.713 [2024-07-15 22:26:57.902767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.713 [2024-07-15 22:26:57.902776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.713 qpair failed and we were unable to recover it. 00:29:32.713 [2024-07-15 22:26:57.903211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.713 [2024-07-15 22:26:57.903218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.713 qpair failed and we were unable to recover it. 00:29:32.713 [2024-07-15 22:26:57.903544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.713 [2024-07-15 22:26:57.903551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.713 qpair failed and we were unable to recover it. 00:29:32.713 [2024-07-15 22:26:57.903978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.713 [2024-07-15 22:26:57.903985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.713 qpair failed and we were unable to recover it. 00:29:32.713 [2024-07-15 22:26:57.904192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.713 [2024-07-15 22:26:57.904201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.713 qpair failed and we were unable to recover it. 00:29:32.713 [2024-07-15 22:26:57.904586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.713 [2024-07-15 22:26:57.904597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.713 qpair failed and we were unable to recover it. 
00:29:32.713 [2024-07-15 22:26:57.905006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.713 [2024-07-15 22:26:57.905012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.713 qpair failed and we were unable to recover it. 00:29:32.713 [2024-07-15 22:26:57.905443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.713 [2024-07-15 22:26:57.905450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.713 qpair failed and we were unable to recover it. 00:29:32.713 [2024-07-15 22:26:57.905839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.713 [2024-07-15 22:26:57.905846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.713 qpair failed and we were unable to recover it. 00:29:32.713 [2024-07-15 22:26:57.906045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.713 [2024-07-15 22:26:57.906052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.713 qpair failed and we were unable to recover it. 00:29:32.713 [2024-07-15 22:26:57.906431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.713 [2024-07-15 22:26:57.906438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.713 qpair failed and we were unable to recover it. 00:29:32.713 [2024-07-15 22:26:57.906831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.713 [2024-07-15 22:26:57.906837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.713 qpair failed and we were unable to recover it. 00:29:32.713 [2024-07-15 22:26:57.907228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.713 [2024-07-15 22:26:57.907235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.713 qpair failed and we were unable to recover it. 00:29:32.713 [2024-07-15 22:26:57.907521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.713 [2024-07-15 22:26:57.907528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.713 qpair failed and we were unable to recover it. 00:29:32.713 [2024-07-15 22:26:57.907840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.713 [2024-07-15 22:26:57.907847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.713 qpair failed and we were unable to recover it. 00:29:32.713 [2024-07-15 22:26:57.908298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.713 [2024-07-15 22:26:57.908304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.713 qpair failed and we were unable to recover it. 
00:29:32.713 [2024-07-15 22:26:57.908473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.713 [2024-07-15 22:26:57.908480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.713 qpair failed and we were unable to recover it. 00:29:32.713 [2024-07-15 22:26:57.908954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.714 [2024-07-15 22:26:57.908961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.714 qpair failed and we were unable to recover it. 00:29:32.714 [2024-07-15 22:26:57.909273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.714 [2024-07-15 22:26:57.909281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.714 qpair failed and we were unable to recover it. 00:29:32.714 [2024-07-15 22:26:57.909700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.714 [2024-07-15 22:26:57.909707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.714 qpair failed and we were unable to recover it. 00:29:32.714 [2024-07-15 22:26:57.910096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.714 [2024-07-15 22:26:57.910103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.714 qpair failed and we were unable to recover it. 00:29:32.714 [2024-07-15 22:26:57.910502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.714 [2024-07-15 22:26:57.910509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.714 qpair failed and we were unable to recover it. 00:29:32.714 [2024-07-15 22:26:57.910812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.714 [2024-07-15 22:26:57.910819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.714 qpair failed and we were unable to recover it. 00:29:32.714 [2024-07-15 22:26:57.911214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.714 [2024-07-15 22:26:57.911220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.714 qpair failed and we were unable to recover it. 00:29:32.714 [2024-07-15 22:26:57.911617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.714 [2024-07-15 22:26:57.911624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.714 qpair failed and we were unable to recover it. 00:29:32.714 [2024-07-15 22:26:57.911846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.714 [2024-07-15 22:26:57.911853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.714 qpair failed and we were unable to recover it. 
00:29:32.714 [2024-07-15 22:26:57.912264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.714 [2024-07-15 22:26:57.912271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.714 qpair failed and we were unable to recover it. 00:29:32.714 [2024-07-15 22:26:57.912704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.714 [2024-07-15 22:26:57.912710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.714 qpair failed and we were unable to recover it. 00:29:32.714 [2024-07-15 22:26:57.913107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.714 [2024-07-15 22:26:57.913115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.714 qpair failed and we were unable to recover it. 00:29:32.714 [2024-07-15 22:26:57.913501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.714 [2024-07-15 22:26:57.913508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.714 qpair failed and we were unable to recover it. 00:29:32.714 [2024-07-15 22:26:57.913797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.714 [2024-07-15 22:26:57.913805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.714 qpair failed and we were unable to recover it. 00:29:32.714 [2024-07-15 22:26:57.914210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.714 [2024-07-15 22:26:57.914218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.714 qpair failed and we were unable to recover it. 00:29:32.714 [2024-07-15 22:26:57.914646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.714 [2024-07-15 22:26:57.914653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.714 qpair failed and we were unable to recover it. 00:29:32.714 [2024-07-15 22:26:57.915038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.714 [2024-07-15 22:26:57.915044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.714 qpair failed and we were unable to recover it. 00:29:32.714 [2024-07-15 22:26:57.915500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.714 [2024-07-15 22:26:57.915507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.714 qpair failed and we were unable to recover it. 00:29:32.714 [2024-07-15 22:26:57.915897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.714 [2024-07-15 22:26:57.915903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.714 qpair failed and we were unable to recover it. 
00:29:32.714 [2024-07-15 22:26:57.916368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.714 [2024-07-15 22:26:57.916375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.714 qpair failed and we were unable to recover it. 00:29:32.714 [2024-07-15 22:26:57.916758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.714 [2024-07-15 22:26:57.916764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.714 qpair failed and we were unable to recover it. 00:29:32.714 [2024-07-15 22:26:57.917150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.714 [2024-07-15 22:26:57.917156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.714 qpair failed and we were unable to recover it. 00:29:32.714 [2024-07-15 22:26:57.917458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.714 [2024-07-15 22:26:57.917465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.714 qpair failed and we were unable to recover it. 00:29:32.714 [2024-07-15 22:26:57.917891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.714 [2024-07-15 22:26:57.917898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.714 qpair failed and we were unable to recover it. 00:29:32.714 [2024-07-15 22:26:57.918327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.714 [2024-07-15 22:26:57.918334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.714 qpair failed and we were unable to recover it. 00:29:32.714 [2024-07-15 22:26:57.918595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.714 [2024-07-15 22:26:57.918601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.714 qpair failed and we were unable to recover it. 00:29:32.714 [2024-07-15 22:26:57.918894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.714 [2024-07-15 22:26:57.918901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.714 qpair failed and we were unable to recover it. 00:29:32.714 [2024-07-15 22:26:57.919334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.714 [2024-07-15 22:26:57.919341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.714 qpair failed and we were unable to recover it. 00:29:32.714 [2024-07-15 22:26:57.919730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.714 [2024-07-15 22:26:57.919737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.714 qpair failed and we were unable to recover it. 
00:29:32.714 [2024-07-15 22:26:57.920131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.714 [2024-07-15 22:26:57.920138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.714 qpair failed and we were unable to recover it. 00:29:32.714 [2024-07-15 22:26:57.920540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.714 [2024-07-15 22:26:57.920547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.714 qpair failed and we were unable to recover it. 00:29:32.714 [2024-07-15 22:26:57.920984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.714 [2024-07-15 22:26:57.920990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.714 qpair failed and we were unable to recover it. 00:29:32.714 [2024-07-15 22:26:57.921425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.714 [2024-07-15 22:26:57.921432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.714 qpair failed and we were unable to recover it. 00:29:32.714 [2024-07-15 22:26:57.921752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.714 [2024-07-15 22:26:57.921760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.714 qpair failed and we were unable to recover it. 00:29:32.714 [2024-07-15 22:26:57.922051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.714 [2024-07-15 22:26:57.922060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.714 qpair failed and we were unable to recover it. 00:29:32.714 [2024-07-15 22:26:57.922300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.714 [2024-07-15 22:26:57.922307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.714 qpair failed and we were unable to recover it. 00:29:32.714 [2024-07-15 22:26:57.922601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.714 [2024-07-15 22:26:57.922608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.714 qpair failed and we were unable to recover it. 00:29:32.714 [2024-07-15 22:26:57.923037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.714 [2024-07-15 22:26:57.923045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.714 qpair failed and we were unable to recover it. 00:29:32.714 [2024-07-15 22:26:57.923446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.714 [2024-07-15 22:26:57.923453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.714 qpair failed and we were unable to recover it. 
00:29:32.714 [2024-07-15 22:26:57.923852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.714 [2024-07-15 22:26:57.923860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.714 qpair failed and we were unable to recover it. 00:29:32.714 [2024-07-15 22:26:57.924295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.715 [2024-07-15 22:26:57.924303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.715 qpair failed and we were unable to recover it. 00:29:32.715 [2024-07-15 22:26:57.924705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.715 [2024-07-15 22:26:57.924713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.715 qpair failed and we were unable to recover it. 00:29:32.715 [2024-07-15 22:26:57.925130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.715 [2024-07-15 22:26:57.925138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.715 qpair failed and we were unable to recover it. 00:29:32.715 [2024-07-15 22:26:57.925529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.715 [2024-07-15 22:26:57.925536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.715 qpair failed and we were unable to recover it. 00:29:32.715 [2024-07-15 22:26:57.925939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.715 [2024-07-15 22:26:57.925946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.715 qpair failed and we were unable to recover it. 00:29:32.715 [2024-07-15 22:26:57.926357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.715 [2024-07-15 22:26:57.926365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.715 qpair failed and we were unable to recover it. 00:29:32.715 [2024-07-15 22:26:57.926682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.715 [2024-07-15 22:26:57.926690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.715 qpair failed and we were unable to recover it. 00:29:32.715 [2024-07-15 22:26:57.927129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.715 [2024-07-15 22:26:57.927137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.715 qpair failed and we were unable to recover it. 00:29:32.715 [2024-07-15 22:26:57.927531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.715 [2024-07-15 22:26:57.927538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.715 qpair failed and we were unable to recover it. 
00:29:32.715 [2024-07-15 22:26:57.927951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.715 [2024-07-15 22:26:57.927959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.715 qpair failed and we were unable to recover it. 00:29:32.715 [2024-07-15 22:26:57.928464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.715 [2024-07-15 22:26:57.928492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.715 qpair failed and we were unable to recover it. 00:29:32.715 [2024-07-15 22:26:57.928927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.715 [2024-07-15 22:26:57.928937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.715 qpair failed and we were unable to recover it. 00:29:32.715 [2024-07-15 22:26:57.929444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.715 [2024-07-15 22:26:57.929472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.715 qpair failed and we were unable to recover it. 00:29:32.715 [2024-07-15 22:26:57.929891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.715 [2024-07-15 22:26:57.929900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.715 qpair failed and we were unable to recover it. 00:29:32.715 [2024-07-15 22:26:57.930309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.715 [2024-07-15 22:26:57.930337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.715 qpair failed and we were unable to recover it. 00:29:32.715 [2024-07-15 22:26:57.930755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.715 [2024-07-15 22:26:57.930765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.715 qpair failed and we were unable to recover it. 00:29:32.715 [2024-07-15 22:26:57.931085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.715 [2024-07-15 22:26:57.931092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.715 qpair failed and we were unable to recover it. 00:29:32.715 [2024-07-15 22:26:57.931525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.715 [2024-07-15 22:26:57.931533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.715 qpair failed and we were unable to recover it. 00:29:32.715 [2024-07-15 22:26:57.931946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.715 [2024-07-15 22:26:57.931953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.715 qpair failed and we were unable to recover it. 
00:29:32.715 [2024-07-15 22:26:57.932459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.715 [2024-07-15 22:26:57.932487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.715 qpair failed and we were unable to recover it. 00:29:32.715 [2024-07-15 22:26:57.932925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.715 [2024-07-15 22:26:57.932935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.715 qpair failed and we were unable to recover it. 00:29:32.715 [2024-07-15 22:26:57.933450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.715 [2024-07-15 22:26:57.933478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.715 qpair failed and we were unable to recover it. 00:29:32.715 [2024-07-15 22:26:57.933796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.715 [2024-07-15 22:26:57.933806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.715 qpair failed and we were unable to recover it. 00:29:32.715 [2024-07-15 22:26:57.934065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.715 [2024-07-15 22:26:57.934072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.715 qpair failed and we were unable to recover it. 00:29:32.715 [2024-07-15 22:26:57.934465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.715 [2024-07-15 22:26:57.934473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.715 qpair failed and we were unable to recover it. 00:29:32.715 [2024-07-15 22:26:57.934897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.715 [2024-07-15 22:26:57.934904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.715 qpair failed and we were unable to recover it. 00:29:32.715 [2024-07-15 22:26:57.935427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.715 [2024-07-15 22:26:57.935456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.715 qpair failed and we were unable to recover it. 00:29:32.715 [2024-07-15 22:26:57.935894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.715 [2024-07-15 22:26:57.935903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.715 qpair failed and we were unable to recover it. 00:29:32.715 [2024-07-15 22:26:57.936436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.715 [2024-07-15 22:26:57.936468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.715 qpair failed and we were unable to recover it. 
00:29:32.715 [2024-07-15 22:26:57.936888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.715 [2024-07-15 22:26:57.936897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.715 qpair failed and we were unable to recover it. 00:29:32.715 [2024-07-15 22:26:57.937417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.715 [2024-07-15 22:26:57.937444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.715 qpair failed and we were unable to recover it. 00:29:32.715 [2024-07-15 22:26:57.937889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.715 [2024-07-15 22:26:57.937898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.715 qpair failed and we were unable to recover it. 00:29:32.715 [2024-07-15 22:26:57.938340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.715 [2024-07-15 22:26:57.938367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.715 qpair failed and we were unable to recover it. 00:29:32.715 [2024-07-15 22:26:57.938776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.715 [2024-07-15 22:26:57.938784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.715 qpair failed and we were unable to recover it. 00:29:32.715 [2024-07-15 22:26:57.939219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.715 [2024-07-15 22:26:57.939226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.715 qpair failed and we were unable to recover it. 00:29:32.715 [2024-07-15 22:26:57.939644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.715 [2024-07-15 22:26:57.939651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.715 qpair failed and we were unable to recover it. 00:29:32.715 [2024-07-15 22:26:57.940048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.715 [2024-07-15 22:26:57.940056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.715 qpair failed and we were unable to recover it. 00:29:32.715 [2024-07-15 22:26:57.940512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.715 [2024-07-15 22:26:57.940520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.715 qpair failed and we were unable to recover it. 00:29:32.715 [2024-07-15 22:26:57.940913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.715 [2024-07-15 22:26:57.940920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.715 qpair failed and we were unable to recover it. 
00:29:32.715 [2024-07-15 22:26:57.941316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.715 [2024-07-15 22:26:57.941344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.715 qpair failed and we were unable to recover it. 00:29:32.715 [2024-07-15 22:26:57.941663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.715 [2024-07-15 22:26:57.941673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.715 qpair failed and we were unable to recover it. 00:29:32.716 [2024-07-15 22:26:57.941920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.716 [2024-07-15 22:26:57.941927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.716 qpair failed and we were unable to recover it. 00:29:32.716 [2024-07-15 22:26:57.942334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.716 [2024-07-15 22:26:57.942342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.716 qpair failed and we were unable to recover it. 00:29:32.716 [2024-07-15 22:26:57.942722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.716 [2024-07-15 22:26:57.942728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.716 qpair failed and we were unable to recover it. 00:29:32.716 [2024-07-15 22:26:57.942997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.716 [2024-07-15 22:26:57.943004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.716 qpair failed and we were unable to recover it. 00:29:32.716 [2024-07-15 22:26:57.943448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.716 [2024-07-15 22:26:57.943454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.716 qpair failed and we were unable to recover it. 00:29:32.716 [2024-07-15 22:26:57.943843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.716 [2024-07-15 22:26:57.943850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.716 qpair failed and we were unable to recover it. 00:29:32.716 [2024-07-15 22:26:57.944388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.716 [2024-07-15 22:26:57.944416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.716 qpair failed and we were unable to recover it. 00:29:32.716 [2024-07-15 22:26:57.944708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.716 [2024-07-15 22:26:57.944717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.716 qpair failed and we were unable to recover it. 
00:29:32.716 [2024-07-15 22:26:57.945146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.716 [2024-07-15 22:26:57.945154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.716 qpair failed and we were unable to recover it. 00:29:32.716 [2024-07-15 22:26:57.945622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.716 [2024-07-15 22:26:57.945628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.716 qpair failed and we were unable to recover it. 00:29:32.716 [2024-07-15 22:26:57.946025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.716 [2024-07-15 22:26:57.946031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.716 qpair failed and we were unable to recover it. 00:29:32.716 [2024-07-15 22:26:57.946292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.716 [2024-07-15 22:26:57.946300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.716 qpair failed and we were unable to recover it. 00:29:32.716 [2024-07-15 22:26:57.946742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.716 [2024-07-15 22:26:57.946749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.716 qpair failed and we were unable to recover it. 00:29:32.716 [2024-07-15 22:26:57.947133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.716 [2024-07-15 22:26:57.947139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.716 qpair failed and we were unable to recover it. 00:29:32.716 [2024-07-15 22:26:57.947543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.716 [2024-07-15 22:26:57.947552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.716 qpair failed and we were unable to recover it. 00:29:32.716 [2024-07-15 22:26:57.947826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.716 [2024-07-15 22:26:57.947833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.716 qpair failed and we were unable to recover it. 00:29:32.716 [2024-07-15 22:26:57.948230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.716 [2024-07-15 22:26:57.948237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.716 qpair failed and we were unable to recover it. 00:29:32.716 [2024-07-15 22:26:57.948621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.716 [2024-07-15 22:26:57.948628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.716 qpair failed and we were unable to recover it. 
00:29:32.716 [2024-07-15 22:26:57.949033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.716 [2024-07-15 22:26:57.949039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.716 qpair failed and we were unable to recover it. 00:29:32.716 [2024-07-15 22:26:57.949436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.716 [2024-07-15 22:26:57.949443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.716 qpair failed and we were unable to recover it. 00:29:32.716 [2024-07-15 22:26:57.949861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.716 [2024-07-15 22:26:57.949868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.716 qpair failed and we were unable to recover it. 00:29:32.716 [2024-07-15 22:26:57.950133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.716 [2024-07-15 22:26:57.950141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.716 qpair failed and we were unable to recover it. 00:29:32.716 [2024-07-15 22:26:57.950554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.716 [2024-07-15 22:26:57.950560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.716 qpair failed and we were unable to recover it. 00:29:32.716 [2024-07-15 22:26:57.950944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.716 [2024-07-15 22:26:57.950952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.716 qpair failed and we were unable to recover it. 00:29:32.716 [2024-07-15 22:26:57.951361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.716 [2024-07-15 22:26:57.951369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.716 qpair failed and we were unable to recover it. 00:29:32.716 [2024-07-15 22:26:57.951820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.716 [2024-07-15 22:26:57.951826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.716 qpair failed and we were unable to recover it. 00:29:32.716 [2024-07-15 22:26:57.952213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.716 [2024-07-15 22:26:57.952220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.716 qpair failed and we were unable to recover it. 00:29:32.716 [2024-07-15 22:26:57.952611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.716 [2024-07-15 22:26:57.952619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.716 qpair failed and we were unable to recover it. 
00:29:32.716 [2024-07-15 22:26:57.953062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.716 [2024-07-15 22:26:57.953069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.716 qpair failed and we were unable to recover it. 00:29:32.716 [2024-07-15 22:26:57.953274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.716 [2024-07-15 22:26:57.953285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.716 qpair failed and we were unable to recover it. 00:29:32.716 [2024-07-15 22:26:57.953559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.716 [2024-07-15 22:26:57.953566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.716 qpair failed and we were unable to recover it. 00:29:32.716 [2024-07-15 22:26:57.954018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.716 [2024-07-15 22:26:57.954024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.716 qpair failed and we were unable to recover it. 00:29:32.716 [2024-07-15 22:26:57.954389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.716 [2024-07-15 22:26:57.954396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.716 qpair failed and we were unable to recover it. 00:29:32.716 [2024-07-15 22:26:57.954815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.716 [2024-07-15 22:26:57.954822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.716 qpair failed and we were unable to recover it. 00:29:32.716 [2024-07-15 22:26:57.955106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.716 [2024-07-15 22:26:57.955113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.716 qpair failed and we were unable to recover it. 00:29:32.716 [2024-07-15 22:26:57.955547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.716 [2024-07-15 22:26:57.955554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.716 qpair failed and we were unable to recover it. 00:29:32.716 [2024-07-15 22:26:57.955936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.716 [2024-07-15 22:26:57.955942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.716 qpair failed and we were unable to recover it. 00:29:32.716 [2024-07-15 22:26:57.956349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.716 [2024-07-15 22:26:57.956355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.716 qpair failed and we were unable to recover it. 
00:29:32.717 [2024-07-15 22:26:57.956649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.717 [2024-07-15 22:26:57.956656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.717 qpair failed and we were unable to recover it. 00:29:32.717 [2024-07-15 22:26:57.957056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.717 [2024-07-15 22:26:57.957064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.717 qpair failed and we were unable to recover it. 00:29:32.717 [2024-07-15 22:26:57.957450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.717 [2024-07-15 22:26:57.957457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.717 qpair failed and we were unable to recover it. 00:29:32.717 [2024-07-15 22:26:57.957856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.717 [2024-07-15 22:26:57.957864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.717 qpair failed and we were unable to recover it. 00:29:32.717 [2024-07-15 22:26:57.958214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.717 [2024-07-15 22:26:57.958220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.717 qpair failed and we were unable to recover it. 00:29:32.717 [2024-07-15 22:26:57.958638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.717 [2024-07-15 22:26:57.958644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.717 qpair failed and we were unable to recover it. 00:29:32.717 [2024-07-15 22:26:57.959074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.717 [2024-07-15 22:26:57.959080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.717 qpair failed and we were unable to recover it. 00:29:32.717 [2024-07-15 22:26:57.959468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.717 [2024-07-15 22:26:57.959476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.717 qpair failed and we were unable to recover it. 00:29:32.717 [2024-07-15 22:26:57.959886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.717 [2024-07-15 22:26:57.959893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.717 qpair failed and we were unable to recover it. 00:29:32.717 [2024-07-15 22:26:57.960098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.717 [2024-07-15 22:26:57.960107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.717 qpair failed and we were unable to recover it. 
00:29:32.717 [2024-07-15 22:26:57.960511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.717 [2024-07-15 22:26:57.960518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.717 qpair failed and we were unable to recover it. 00:29:32.717 [2024-07-15 22:26:57.960910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.717 [2024-07-15 22:26:57.960917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.717 qpair failed and we were unable to recover it. 00:29:32.717 [2024-07-15 22:26:57.961401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.717 [2024-07-15 22:26:57.961428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.717 qpair failed and we were unable to recover it. 00:29:32.717 [2024-07-15 22:26:57.961909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.717 [2024-07-15 22:26:57.961918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.717 qpair failed and we were unable to recover it. 00:29:32.717 [2024-07-15 22:26:57.962323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.717 [2024-07-15 22:26:57.962350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.717 qpair failed and we were unable to recover it. 00:29:32.717 [2024-07-15 22:26:57.962667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.717 [2024-07-15 22:26:57.962675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.717 qpair failed and we were unable to recover it. 00:29:32.717 [2024-07-15 22:26:57.963144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.717 [2024-07-15 22:26:57.963152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.717 qpair failed and we were unable to recover it. 00:29:32.717 [2024-07-15 22:26:57.963456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.717 [2024-07-15 22:26:57.963464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.717 qpair failed and we were unable to recover it. 00:29:32.717 [2024-07-15 22:26:57.963855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.717 [2024-07-15 22:26:57.963861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.717 qpair failed and we were unable to recover it. 00:29:32.717 [2024-07-15 22:26:57.964250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.717 [2024-07-15 22:26:57.964257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.717 qpair failed and we were unable to recover it. 
00:29:32.717 [2024-07-15 22:26:57.964640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.717 [2024-07-15 22:26:57.964646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.717 qpair failed and we were unable to recover it. 00:29:32.717 [2024-07-15 22:26:57.965060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.717 [2024-07-15 22:26:57.965067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.717 qpair failed and we were unable to recover it. 00:29:32.717 [2024-07-15 22:26:57.965495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.717 [2024-07-15 22:26:57.965502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.717 qpair failed and we were unable to recover it. 00:29:32.717 [2024-07-15 22:26:57.965938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.717 [2024-07-15 22:26:57.965945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.717 qpair failed and we were unable to recover it. 00:29:32.717 [2024-07-15 22:26:57.966386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.717 [2024-07-15 22:26:57.966393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.717 qpair failed and we were unable to recover it. 00:29:32.717 [2024-07-15 22:26:57.966781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.717 [2024-07-15 22:26:57.966788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.717 qpair failed and we were unable to recover it. 00:29:32.717 [2024-07-15 22:26:57.967183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.717 [2024-07-15 22:26:57.967190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.717 qpair failed and we were unable to recover it. 00:29:32.717 [2024-07-15 22:26:57.967493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.717 [2024-07-15 22:26:57.967503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.717 qpair failed and we were unable to recover it. 00:29:32.717 [2024-07-15 22:26:57.967946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.717 [2024-07-15 22:26:57.967952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.717 qpair failed and we were unable to recover it. 00:29:32.717 [2024-07-15 22:26:57.968270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.717 [2024-07-15 22:26:57.968279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.717 qpair failed and we were unable to recover it. 
00:29:32.717 [2024-07-15 22:26:57.968708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.717 [2024-07-15 22:26:57.968715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.717 qpair failed and we were unable to recover it. 00:29:32.717 [2024-07-15 22:26:57.969112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.718 [2024-07-15 22:26:57.969118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.718 qpair failed and we were unable to recover it. 00:29:32.718 [2024-07-15 22:26:57.969419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.718 [2024-07-15 22:26:57.969426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.718 qpair failed and we were unable to recover it. 00:29:32.718 [2024-07-15 22:26:57.969863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.718 [2024-07-15 22:26:57.969869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.718 qpair failed and we were unable to recover it. 00:29:32.718 [2024-07-15 22:26:57.970267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.718 [2024-07-15 22:26:57.970273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.718 qpair failed and we were unable to recover it. 00:29:32.718 [2024-07-15 22:26:57.970697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.718 [2024-07-15 22:26:57.970704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.718 qpair failed and we were unable to recover it. 00:29:32.718 [2024-07-15 22:26:57.971146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.718 [2024-07-15 22:26:57.971153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.718 qpair failed and we were unable to recover it. 00:29:32.718 [2024-07-15 22:26:57.971554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.718 [2024-07-15 22:26:57.971561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.718 qpair failed and we were unable to recover it. 00:29:32.718 [2024-07-15 22:26:57.971972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.718 [2024-07-15 22:26:57.971978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.718 qpair failed and we were unable to recover it. 00:29:32.718 [2024-07-15 22:26:57.972450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.718 [2024-07-15 22:26:57.972456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.718 qpair failed and we were unable to recover it. 
00:29:32.718 [2024-07-15 22:26:57.972847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.718 [2024-07-15 22:26:57.972853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.718 qpair failed and we were unable to recover it. 00:29:32.718 [2024-07-15 22:26:57.973364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.718 [2024-07-15 22:26:57.973392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.718 qpair failed and we were unable to recover it. 00:29:32.718 [2024-07-15 22:26:57.973828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.718 [2024-07-15 22:26:57.973836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.718 qpair failed and we were unable to recover it. 00:29:32.718 [2024-07-15 22:26:57.974105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.718 [2024-07-15 22:26:57.974112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.718 qpair failed and we were unable to recover it. 00:29:32.718 [2024-07-15 22:26:57.974538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.718 [2024-07-15 22:26:57.974546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.718 qpair failed and we were unable to recover it. 00:29:32.718 [2024-07-15 22:26:57.974948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.718 [2024-07-15 22:26:57.974955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.718 qpair failed and we were unable to recover it. 00:29:32.718 [2024-07-15 22:26:57.975405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.718 [2024-07-15 22:26:57.975433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.718 qpair failed and we were unable to recover it. 00:29:32.718 [2024-07-15 22:26:57.975834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.718 [2024-07-15 22:26:57.975842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.718 qpair failed and we were unable to recover it. 00:29:32.718 [2024-07-15 22:26:57.976411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.718 [2024-07-15 22:26:57.976438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.718 qpair failed and we were unable to recover it. 00:29:32.718 [2024-07-15 22:26:57.976714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.718 [2024-07-15 22:26:57.976723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.718 qpair failed and we were unable to recover it. 
00:29:32.993 [2024-07-15 22:26:58.058184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.993 [2024-07-15 22:26:58.058190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.993 qpair failed and we were unable to recover it. 00:29:32.993 [2024-07-15 22:26:58.058579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.993 [2024-07-15 22:26:58.058586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.993 qpair failed and we were unable to recover it. 00:29:32.993 [2024-07-15 22:26:58.058979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.993 [2024-07-15 22:26:58.058985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.993 qpair failed and we were unable to recover it. 00:29:32.993 [2024-07-15 22:26:58.059422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.993 [2024-07-15 22:26:58.059429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.993 qpair failed and we were unable to recover it. 00:29:32.993 [2024-07-15 22:26:58.059812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.993 [2024-07-15 22:26:58.059819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.993 qpair failed and we were unable to recover it. 00:29:32.993 [2024-07-15 22:26:58.060319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.993 [2024-07-15 22:26:58.060347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.993 qpair failed and we were unable to recover it. 00:29:32.993 [2024-07-15 22:26:58.060771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.993 [2024-07-15 22:26:58.060780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.993 qpair failed and we were unable to recover it. 00:29:32.993 [2024-07-15 22:26:58.061065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.993 [2024-07-15 22:26:58.061072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.993 qpair failed and we were unable to recover it. 00:29:32.993 [2024-07-15 22:26:58.061469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.993 [2024-07-15 22:26:58.061476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.993 qpair failed and we were unable to recover it. 00:29:32.993 [2024-07-15 22:26:58.061740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.993 [2024-07-15 22:26:58.061747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.993 qpair failed and we were unable to recover it. 
00:29:32.993 [2024-07-15 22:26:58.062157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.993 [2024-07-15 22:26:58.062164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.993 qpair failed and we were unable to recover it. 00:29:32.993 [2024-07-15 22:26:58.062584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.993 [2024-07-15 22:26:58.062590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.993 qpair failed and we were unable to recover it. 00:29:32.993 [2024-07-15 22:26:58.063024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.993 [2024-07-15 22:26:58.063030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.993 qpair failed and we were unable to recover it. 00:29:32.993 [2024-07-15 22:26:58.063443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.993 [2024-07-15 22:26:58.063450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.993 qpair failed and we were unable to recover it. 00:29:32.993 [2024-07-15 22:26:58.063878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.993 [2024-07-15 22:26:58.063885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.993 qpair failed and we were unable to recover it. 00:29:32.993 [2024-07-15 22:26:58.064294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.993 [2024-07-15 22:26:58.064302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.993 qpair failed and we were unable to recover it. 00:29:32.994 [2024-07-15 22:26:58.064718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.994 [2024-07-15 22:26:58.064728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.994 qpair failed and we were unable to recover it. 00:29:32.994 [2024-07-15 22:26:58.065156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.994 [2024-07-15 22:26:58.065164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.994 qpair failed and we were unable to recover it. 00:29:32.994 [2024-07-15 22:26:58.065574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.994 [2024-07-15 22:26:58.065581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.994 qpair failed and we were unable to recover it. 00:29:32.994 [2024-07-15 22:26:58.065968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.994 [2024-07-15 22:26:58.065974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.994 qpair failed and we were unable to recover it. 
00:29:32.994 [2024-07-15 22:26:58.066373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.994 [2024-07-15 22:26:58.066380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.994 qpair failed and we were unable to recover it. 00:29:32.994 [2024-07-15 22:26:58.066785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.994 [2024-07-15 22:26:58.066791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.994 qpair failed and we were unable to recover it. 00:29:32.994 [2024-07-15 22:26:58.067185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.994 [2024-07-15 22:26:58.067192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.994 qpair failed and we were unable to recover it. 00:29:32.994 [2024-07-15 22:26:58.067598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.994 [2024-07-15 22:26:58.067605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.994 qpair failed and we were unable to recover it. 00:29:32.994 [2024-07-15 22:26:58.067990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.994 [2024-07-15 22:26:58.067996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.994 qpair failed and we were unable to recover it. 00:29:32.994 [2024-07-15 22:26:58.068381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.994 [2024-07-15 22:26:58.068388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.994 qpair failed and we were unable to recover it. 00:29:32.994 [2024-07-15 22:26:58.068811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.994 [2024-07-15 22:26:58.068818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.994 qpair failed and we were unable to recover it. 00:29:32.994 [2024-07-15 22:26:58.069315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.994 [2024-07-15 22:26:58.069343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.994 qpair failed and we were unable to recover it. 00:29:32.994 [2024-07-15 22:26:58.069657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.994 [2024-07-15 22:26:58.069665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.994 qpair failed and we were unable to recover it. 00:29:32.994 [2024-07-15 22:26:58.070100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.994 [2024-07-15 22:26:58.070106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.994 qpair failed and we were unable to recover it. 
00:29:32.994 [2024-07-15 22:26:58.070505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.994 [2024-07-15 22:26:58.070512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.994 qpair failed and we were unable to recover it. 00:29:32.994 [2024-07-15 22:26:58.070896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.994 [2024-07-15 22:26:58.070903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.994 qpair failed and we were unable to recover it. 00:29:32.994 [2024-07-15 22:26:58.071106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.994 [2024-07-15 22:26:58.071113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.994 qpair failed and we were unable to recover it. 00:29:32.994 [2024-07-15 22:26:58.071631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.994 [2024-07-15 22:26:58.071638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.994 qpair failed and we were unable to recover it. 00:29:32.994 [2024-07-15 22:26:58.072025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.994 [2024-07-15 22:26:58.072032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.994 qpair failed and we were unable to recover it. 00:29:32.994 [2024-07-15 22:26:58.072581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.994 [2024-07-15 22:26:58.072609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.994 qpair failed and we were unable to recover it. 00:29:32.994 [2024-07-15 22:26:58.073016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.994 [2024-07-15 22:26:58.073024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.994 qpair failed and we were unable to recover it. 00:29:32.994 [2024-07-15 22:26:58.073336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.994 [2024-07-15 22:26:58.073344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.994 qpair failed and we were unable to recover it. 00:29:32.994 [2024-07-15 22:26:58.073758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.994 [2024-07-15 22:26:58.073765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.994 qpair failed and we were unable to recover it. 00:29:32.994 [2024-07-15 22:26:58.074065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.994 [2024-07-15 22:26:58.074072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.994 qpair failed and we were unable to recover it. 
00:29:32.994 [2024-07-15 22:26:58.074474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.994 [2024-07-15 22:26:58.074480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.994 qpair failed and we were unable to recover it. 00:29:32.994 [2024-07-15 22:26:58.074866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.994 [2024-07-15 22:26:58.074873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.994 qpair failed and we were unable to recover it. 00:29:32.994 [2024-07-15 22:26:58.075258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.994 [2024-07-15 22:26:58.075264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.994 qpair failed and we were unable to recover it. 00:29:32.994 [2024-07-15 22:26:58.075682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.994 [2024-07-15 22:26:58.075689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.994 qpair failed and we were unable to recover it. 00:29:32.994 [2024-07-15 22:26:58.076128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.994 [2024-07-15 22:26:58.076135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.994 qpair failed and we were unable to recover it. 00:29:32.994 [2024-07-15 22:26:58.076479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.994 [2024-07-15 22:26:58.076487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.994 qpair failed and we were unable to recover it. 00:29:32.994 [2024-07-15 22:26:58.076807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.994 [2024-07-15 22:26:58.076814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.994 qpair failed and we were unable to recover it. 00:29:32.994 [2024-07-15 22:26:58.077295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.994 [2024-07-15 22:26:58.077323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.994 qpair failed and we were unable to recover it. 00:29:32.994 [2024-07-15 22:26:58.077772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.994 [2024-07-15 22:26:58.077780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.994 qpair failed and we were unable to recover it. 00:29:32.994 [2024-07-15 22:26:58.078216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.994 [2024-07-15 22:26:58.078224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.994 qpair failed and we were unable to recover it. 
00:29:32.994 [2024-07-15 22:26:58.078528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.994 [2024-07-15 22:26:58.078535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.994 qpair failed and we were unable to recover it. 00:29:32.994 [2024-07-15 22:26:58.078954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.994 [2024-07-15 22:26:58.078960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.994 qpair failed and we were unable to recover it. 00:29:32.994 [2024-07-15 22:26:58.079373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.994 [2024-07-15 22:26:58.079380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.994 qpair failed and we were unable to recover it. 00:29:32.994 [2024-07-15 22:26:58.079804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.994 [2024-07-15 22:26:58.079811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.994 qpair failed and we were unable to recover it. 00:29:32.994 [2024-07-15 22:26:58.080227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.994 [2024-07-15 22:26:58.080235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.994 qpair failed and we were unable to recover it. 00:29:32.994 [2024-07-15 22:26:58.080649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.994 [2024-07-15 22:26:58.080656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.995 qpair failed and we were unable to recover it. 00:29:32.995 [2024-07-15 22:26:58.080955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.995 [2024-07-15 22:26:58.080964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.995 qpair failed and we were unable to recover it. 00:29:32.995 [2024-07-15 22:26:58.081395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.995 [2024-07-15 22:26:58.081402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.995 qpair failed and we were unable to recover it. 00:29:32.995 [2024-07-15 22:26:58.081787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.995 [2024-07-15 22:26:58.081793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.995 qpair failed and we were unable to recover it. 00:29:32.995 [2024-07-15 22:26:58.082180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.995 [2024-07-15 22:26:58.082187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.995 qpair failed and we were unable to recover it. 
00:29:32.995 [2024-07-15 22:26:58.082590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.995 [2024-07-15 22:26:58.082596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.995 qpair failed and we were unable to recover it. 00:29:32.995 [2024-07-15 22:26:58.082983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.995 [2024-07-15 22:26:58.082990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.995 qpair failed and we were unable to recover it. 00:29:32.995 [2024-07-15 22:26:58.083260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.995 [2024-07-15 22:26:58.083268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.995 qpair failed and we were unable to recover it. 00:29:32.995 [2024-07-15 22:26:58.083663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.995 [2024-07-15 22:26:58.083670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.995 qpair failed and we were unable to recover it. 00:29:32.995 [2024-07-15 22:26:58.084093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.995 [2024-07-15 22:26:58.084100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.995 qpair failed and we were unable to recover it. 00:29:32.995 [2024-07-15 22:26:58.084479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.995 [2024-07-15 22:26:58.084486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.995 qpair failed and we were unable to recover it. 00:29:32.995 [2024-07-15 22:26:58.084852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.995 [2024-07-15 22:26:58.084858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.995 qpair failed and we were unable to recover it. 00:29:32.995 [2024-07-15 22:26:58.085362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.995 [2024-07-15 22:26:58.085390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.995 qpair failed and we were unable to recover it. 00:29:32.995 [2024-07-15 22:26:58.085793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.995 [2024-07-15 22:26:58.085801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.995 qpair failed and we were unable to recover it. 00:29:32.995 [2024-07-15 22:26:58.086262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.995 [2024-07-15 22:26:58.086270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.995 qpair failed and we were unable to recover it. 
00:29:32.995 [2024-07-15 22:26:58.086664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.995 [2024-07-15 22:26:58.086671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.995 qpair failed and we were unable to recover it. 00:29:32.995 [2024-07-15 22:26:58.087072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.995 [2024-07-15 22:26:58.087079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.995 qpair failed and we were unable to recover it. 00:29:32.995 [2024-07-15 22:26:58.087526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.995 [2024-07-15 22:26:58.087532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.995 qpair failed and we were unable to recover it. 00:29:32.995 [2024-07-15 22:26:58.087922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.995 [2024-07-15 22:26:58.087928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.995 qpair failed and we were unable to recover it. 00:29:32.995 [2024-07-15 22:26:58.088407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.995 [2024-07-15 22:26:58.088435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.995 qpair failed and we were unable to recover it. 00:29:32.995 [2024-07-15 22:26:58.088745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.995 [2024-07-15 22:26:58.088754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.995 qpair failed and we were unable to recover it. 00:29:32.995 [2024-07-15 22:26:58.089142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.995 [2024-07-15 22:26:58.089150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.995 qpair failed and we were unable to recover it. 00:29:32.995 [2024-07-15 22:26:58.089590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.995 [2024-07-15 22:26:58.089596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.995 qpair failed and we were unable to recover it. 00:29:32.995 [2024-07-15 22:26:58.090015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.995 [2024-07-15 22:26:58.090022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.995 qpair failed and we were unable to recover it. 00:29:32.995 [2024-07-15 22:26:58.090443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.995 [2024-07-15 22:26:58.090450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.995 qpair failed and we were unable to recover it. 
00:29:32.995 [2024-07-15 22:26:58.090847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.995 [2024-07-15 22:26:58.090854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.995 qpair failed and we were unable to recover it. 00:29:32.995 [2024-07-15 22:26:58.091310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.995 [2024-07-15 22:26:58.091316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.995 qpair failed and we were unable to recover it. 00:29:32.995 [2024-07-15 22:26:58.091729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.995 [2024-07-15 22:26:58.091736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.995 qpair failed and we were unable to recover it. 00:29:32.995 [2024-07-15 22:26:58.092125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.995 [2024-07-15 22:26:58.092132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.995 qpair failed and we were unable to recover it. 00:29:32.995 [2024-07-15 22:26:58.092468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.995 [2024-07-15 22:26:58.092476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.995 qpair failed and we were unable to recover it. 00:29:32.995 [2024-07-15 22:26:58.092883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.995 [2024-07-15 22:26:58.092890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.995 qpair failed and we were unable to recover it. 00:29:32.995 [2024-07-15 22:26:58.093365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.995 [2024-07-15 22:26:58.093393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.995 qpair failed and we were unable to recover it. 00:29:32.995 [2024-07-15 22:26:58.093812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.995 [2024-07-15 22:26:58.093821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.995 qpair failed and we were unable to recover it. 00:29:32.996 [2024-07-15 22:26:58.094371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.996 [2024-07-15 22:26:58.094399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.996 qpair failed and we were unable to recover it. 00:29:32.996 [2024-07-15 22:26:58.094852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.996 [2024-07-15 22:26:58.094861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.996 qpair failed and we were unable to recover it. 
00:29:32.996 [2024-07-15 22:26:58.095255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.996 [2024-07-15 22:26:58.095263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.996 qpair failed and we were unable to recover it. 00:29:32.996 [2024-07-15 22:26:58.095681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.996 [2024-07-15 22:26:58.095687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.996 qpair failed and we were unable to recover it. 00:29:32.996 [2024-07-15 22:26:58.096135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.996 [2024-07-15 22:26:58.096142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.996 qpair failed and we were unable to recover it. 00:29:32.996 [2024-07-15 22:26:58.096560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.996 [2024-07-15 22:26:58.096566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.996 qpair failed and we were unable to recover it. 00:29:32.996 [2024-07-15 22:26:58.096967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.996 [2024-07-15 22:26:58.096974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.996 qpair failed and we were unable to recover it. 00:29:32.996 [2024-07-15 22:26:58.097366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.996 [2024-07-15 22:26:58.097374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.996 qpair failed and we were unable to recover it. 00:29:32.996 [2024-07-15 22:26:58.097809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.996 [2024-07-15 22:26:58.097820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.996 qpair failed and we were unable to recover it. 00:29:32.996 [2024-07-15 22:26:58.098254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.996 [2024-07-15 22:26:58.098262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.996 qpair failed and we were unable to recover it. 00:29:32.996 [2024-07-15 22:26:58.098656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.996 [2024-07-15 22:26:58.098662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.996 qpair failed and we were unable to recover it. 00:29:32.996 [2024-07-15 22:26:58.099041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.996 [2024-07-15 22:26:58.099055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.996 qpair failed and we were unable to recover it. 
00:29:32.996 [2024-07-15 22:26:58.099366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.996 [2024-07-15 22:26:58.099373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.996 qpair failed and we were unable to recover it. 00:29:32.996 [2024-07-15 22:26:58.099771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.996 [2024-07-15 22:26:58.099778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.996 qpair failed and we were unable to recover it. 00:29:32.996 [2024-07-15 22:26:58.100192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.996 [2024-07-15 22:26:58.100200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.996 qpair failed and we were unable to recover it. 00:29:32.996 [2024-07-15 22:26:58.100612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.996 [2024-07-15 22:26:58.100619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.996 qpair failed and we were unable to recover it. 00:29:32.996 [2024-07-15 22:26:58.101009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.996 [2024-07-15 22:26:58.101016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.996 qpair failed and we were unable to recover it. 00:29:32.996 [2024-07-15 22:26:58.101284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.996 [2024-07-15 22:26:58.101291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.996 qpair failed and we were unable to recover it. 00:29:32.996 [2024-07-15 22:26:58.101691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.996 [2024-07-15 22:26:58.101698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.996 qpair failed and we were unable to recover it. 00:29:32.996 [2024-07-15 22:26:58.102117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.996 [2024-07-15 22:26:58.102127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.996 qpair failed and we were unable to recover it. 00:29:32.996 [2024-07-15 22:26:58.102576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.996 [2024-07-15 22:26:58.102584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.996 qpair failed and we were unable to recover it. 00:29:32.996 [2024-07-15 22:26:58.102984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.996 [2024-07-15 22:26:58.102991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.996 qpair failed and we were unable to recover it. 
00:29:32.996 [2024-07-15 22:26:58.103404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.996 [2024-07-15 22:26:58.103432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.996 qpair failed and we were unable to recover it. 00:29:32.996 [2024-07-15 22:26:58.103839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.996 [2024-07-15 22:26:58.103848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.996 qpair failed and we were unable to recover it. 00:29:32.996 [2024-07-15 22:26:58.104188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.996 [2024-07-15 22:26:58.104196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.996 qpair failed and we were unable to recover it. 00:29:32.996 [2024-07-15 22:26:58.104613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.996 [2024-07-15 22:26:58.104620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.996 qpair failed and we were unable to recover it. 00:29:32.996 [2024-07-15 22:26:58.104830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.996 [2024-07-15 22:26:58.104839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.996 qpair failed and we were unable to recover it. 00:29:32.996 [2024-07-15 22:26:58.105272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.996 [2024-07-15 22:26:58.105278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.996 qpair failed and we were unable to recover it. 00:29:32.996 [2024-07-15 22:26:58.105693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.996 [2024-07-15 22:26:58.105699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.996 qpair failed and we were unable to recover it. 00:29:32.996 [2024-07-15 22:26:58.106113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.996 [2024-07-15 22:26:58.106120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.996 qpair failed and we were unable to recover it. 00:29:32.996 [2024-07-15 22:26:58.106460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.996 [2024-07-15 22:26:58.106467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.996 qpair failed and we were unable to recover it. 00:29:32.996 [2024-07-15 22:26:58.106878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.996 [2024-07-15 22:26:58.106884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.996 qpair failed and we were unable to recover it. 
00:29:32.996 [2024-07-15 22:26:58.107286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.996 [2024-07-15 22:26:58.107293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.996 qpair failed and we were unable to recover it. 00:29:32.996 [2024-07-15 22:26:58.107760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.996 [2024-07-15 22:26:58.107767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.996 qpair failed and we were unable to recover it. 00:29:32.996 [2024-07-15 22:26:58.108057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.996 [2024-07-15 22:26:58.108064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.996 qpair failed and we were unable to recover it. 00:29:32.996 [2024-07-15 22:26:58.108493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.996 [2024-07-15 22:26:58.108500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.996 qpair failed and we were unable to recover it. 00:29:32.996 [2024-07-15 22:26:58.108899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.996 [2024-07-15 22:26:58.108905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.996 qpair failed and we were unable to recover it. 00:29:32.996 [2024-07-15 22:26:58.109418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.996 [2024-07-15 22:26:58.109445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.996 qpair failed and we were unable to recover it. 00:29:32.996 [2024-07-15 22:26:58.109854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.996 [2024-07-15 22:26:58.109862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.996 qpair failed and we were unable to recover it. 00:29:32.996 [2024-07-15 22:26:58.110403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.996 [2024-07-15 22:26:58.110430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.997 qpair failed and we were unable to recover it. 00:29:32.997 [2024-07-15 22:26:58.110745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.997 [2024-07-15 22:26:58.110754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.997 qpair failed and we were unable to recover it. 00:29:32.997 [2024-07-15 22:26:58.111224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.997 [2024-07-15 22:26:58.111231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.997 qpair failed and we were unable to recover it. 
00:29:32.997 [2024-07-15 22:26:58.111654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.997 [2024-07-15 22:26:58.111661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.997 qpair failed and we were unable to recover it. 00:29:32.997 [2024-07-15 22:26:58.112055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.997 [2024-07-15 22:26:58.112061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.997 qpair failed and we were unable to recover it. 00:29:32.997 [2024-07-15 22:26:58.112542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.997 [2024-07-15 22:26:58.112549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.997 qpair failed and we were unable to recover it. 00:29:32.997 [2024-07-15 22:26:58.112940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.997 [2024-07-15 22:26:58.112947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.997 qpair failed and we were unable to recover it. 00:29:32.997 [2024-07-15 22:26:58.113374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.997 [2024-07-15 22:26:58.113401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.997 qpair failed and we were unable to recover it. 00:29:32.997 [2024-07-15 22:26:58.113806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.997 [2024-07-15 22:26:58.113814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.997 qpair failed and we were unable to recover it. 00:29:32.997 [2024-07-15 22:26:58.114241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.997 [2024-07-15 22:26:58.114253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.997 qpair failed and we were unable to recover it. 00:29:32.997 [2024-07-15 22:26:58.114649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.997 [2024-07-15 22:26:58.114656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.997 qpair failed and we were unable to recover it. 00:29:32.997 [2024-07-15 22:26:58.115067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.997 [2024-07-15 22:26:58.115074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.997 qpair failed and we were unable to recover it. 00:29:32.997 [2024-07-15 22:26:58.115290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.997 [2024-07-15 22:26:58.115298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:32.997 qpair failed and we were unable to recover it. 
00:29:32.997 [2024-07-15 22:26:58.115733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.997 [2024-07-15 22:26:58.115740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:32.997 qpair failed and we were unable to recover it.
00:29:32.997 [... the same three-line failure repeats for every subsequent reconnect attempt (timestamps 22:26:58.116214 through 22:26:58.203114): connect() failed with errno = 111 in posix.c:1038:posix_sock_create, followed by the sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 in nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock, each ending with "qpair failed and we were unable to recover it." ...]
00:29:33.002 [2024-07-15 22:26:58.203519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.002 [2024-07-15 22:26:58.203526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:33.002 qpair failed and we were unable to recover it.
00:29:33.002 [2024-07-15 22:26:58.203792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.002 [2024-07-15 22:26:58.203798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.002 qpair failed and we were unable to recover it. 00:29:33.002 [2024-07-15 22:26:58.204210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.002 [2024-07-15 22:26:58.204217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.002 qpair failed and we were unable to recover it. 00:29:33.002 [2024-07-15 22:26:58.204690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.002 [2024-07-15 22:26:58.204697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.002 qpair failed and we were unable to recover it. 00:29:33.002 [2024-07-15 22:26:58.205087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.002 [2024-07-15 22:26:58.205093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.002 qpair failed and we were unable to recover it. 00:29:33.002 [2024-07-15 22:26:58.205422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.002 [2024-07-15 22:26:58.205430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.002 qpair failed and we were unable to recover it. 00:29:33.002 [2024-07-15 22:26:58.205840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.002 [2024-07-15 22:26:58.205846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.002 qpair failed and we were unable to recover it. 00:29:33.002 [2024-07-15 22:26:58.206237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.002 [2024-07-15 22:26:58.206243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.002 qpair failed and we were unable to recover it. 00:29:33.002 [2024-07-15 22:26:58.206492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.002 [2024-07-15 22:26:58.206499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.002 qpair failed and we were unable to recover it. 00:29:33.002 [2024-07-15 22:26:58.206955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.002 [2024-07-15 22:26:58.206961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.002 qpair failed and we were unable to recover it. 00:29:33.002 [2024-07-15 22:26:58.207396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.002 [2024-07-15 22:26:58.207403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.002 qpair failed and we were unable to recover it. 
00:29:33.002 [2024-07-15 22:26:58.207671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.002 [2024-07-15 22:26:58.207677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.002 qpair failed and we were unable to recover it. 00:29:33.002 [2024-07-15 22:26:58.208129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.002 [2024-07-15 22:26:58.208136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.002 qpair failed and we were unable to recover it. 00:29:33.002 [2024-07-15 22:26:58.208529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.002 [2024-07-15 22:26:58.208535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.003 qpair failed and we were unable to recover it. 00:29:33.003 [2024-07-15 22:26:58.208889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.003 [2024-07-15 22:26:58.208896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.003 qpair failed and we were unable to recover it. 00:29:33.003 [2024-07-15 22:26:58.209373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.003 [2024-07-15 22:26:58.209400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.003 qpair failed and we were unable to recover it. 00:29:33.003 [2024-07-15 22:26:58.209900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.003 [2024-07-15 22:26:58.209909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.003 qpair failed and we were unable to recover it. 00:29:33.003 [2024-07-15 22:26:58.210397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.003 [2024-07-15 22:26:58.210425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.003 qpair failed and we were unable to recover it. 00:29:33.003 [2024-07-15 22:26:58.210844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.003 [2024-07-15 22:26:58.210853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.003 qpair failed and we were unable to recover it. 00:29:33.003 [2024-07-15 22:26:58.211404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.003 [2024-07-15 22:26:58.211432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.003 qpair failed and we were unable to recover it. 00:29:33.003 [2024-07-15 22:26:58.211872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.003 [2024-07-15 22:26:58.211881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.003 qpair failed and we were unable to recover it. 
00:29:33.003 [2024-07-15 22:26:58.212406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.003 [2024-07-15 22:26:58.212434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.003 qpair failed and we were unable to recover it. 00:29:33.003 [2024-07-15 22:26:58.212844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.003 [2024-07-15 22:26:58.212852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.003 qpair failed and we were unable to recover it. 00:29:33.003 [2024-07-15 22:26:58.213360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.003 [2024-07-15 22:26:58.213387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.003 qpair failed and we were unable to recover it. 00:29:33.003 [2024-07-15 22:26:58.213856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.003 [2024-07-15 22:26:58.213865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.003 qpair failed and we were unable to recover it. 00:29:33.003 [2024-07-15 22:26:58.214365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.003 [2024-07-15 22:26:58.214393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.003 qpair failed and we were unable to recover it. 00:29:33.003 [2024-07-15 22:26:58.214793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.003 [2024-07-15 22:26:58.214801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.003 qpair failed and we were unable to recover it. 00:29:33.003 [2024-07-15 22:26:58.215192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.003 [2024-07-15 22:26:58.215203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.003 qpair failed and we were unable to recover it. 00:29:33.003 [2024-07-15 22:26:58.215596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.003 [2024-07-15 22:26:58.215603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.003 qpair failed and we were unable to recover it. 00:29:33.003 [2024-07-15 22:26:58.216033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.003 [2024-07-15 22:26:58.216040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.003 qpair failed and we were unable to recover it. 00:29:33.003 [2024-07-15 22:26:58.216439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.003 [2024-07-15 22:26:58.216447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.003 qpair failed and we were unable to recover it. 
00:29:33.003 [2024-07-15 22:26:58.216845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.003 [2024-07-15 22:26:58.216852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.003 qpair failed and we were unable to recover it. 00:29:33.003 [2024-07-15 22:26:58.217285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.003 [2024-07-15 22:26:58.217292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.003 qpair failed and we were unable to recover it. 00:29:33.003 [2024-07-15 22:26:58.217685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.003 [2024-07-15 22:26:58.217692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.003 qpair failed and we were unable to recover it. 00:29:33.003 [2024-07-15 22:26:58.218102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.003 [2024-07-15 22:26:58.218109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.003 qpair failed and we were unable to recover it. 00:29:33.003 [2024-07-15 22:26:58.218527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.003 [2024-07-15 22:26:58.218534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.003 qpair failed and we were unable to recover it. 00:29:33.003 [2024-07-15 22:26:58.218768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.003 [2024-07-15 22:26:58.218778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.003 qpair failed and we were unable to recover it. 00:29:33.003 [2024-07-15 22:26:58.219202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.003 [2024-07-15 22:26:58.219209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.003 qpair failed and we were unable to recover it. 00:29:33.003 [2024-07-15 22:26:58.219598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.003 [2024-07-15 22:26:58.219605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.003 qpair failed and we were unable to recover it. 00:29:33.003 [2024-07-15 22:26:58.220015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.003 [2024-07-15 22:26:58.220021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.003 qpair failed and we were unable to recover it. 00:29:33.003 [2024-07-15 22:26:58.220444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.003 [2024-07-15 22:26:58.220452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.003 qpair failed and we were unable to recover it. 
00:29:33.003 [2024-07-15 22:26:58.220842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.003 [2024-07-15 22:26:58.220849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.003 qpair failed and we were unable to recover it. 00:29:33.003 [2024-07-15 22:26:58.221240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.003 [2024-07-15 22:26:58.221246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.003 qpair failed and we were unable to recover it. 00:29:33.003 [2024-07-15 22:26:58.221667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.003 [2024-07-15 22:26:58.221674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.003 qpair failed and we were unable to recover it. 00:29:33.003 [2024-07-15 22:26:58.222086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.003 [2024-07-15 22:26:58.222093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.003 qpair failed and we were unable to recover it. 00:29:33.003 [2024-07-15 22:26:58.222493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.003 [2024-07-15 22:26:58.222500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.003 qpair failed and we were unable to recover it. 00:29:33.003 [2024-07-15 22:26:58.222907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.003 [2024-07-15 22:26:58.222913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.003 qpair failed and we were unable to recover it. 00:29:33.003 [2024-07-15 22:26:58.223455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.003 [2024-07-15 22:26:58.223482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.003 qpair failed and we were unable to recover it. 00:29:33.003 [2024-07-15 22:26:58.223922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.003 [2024-07-15 22:26:58.223931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.003 qpair failed and we were unable to recover it. 00:29:33.003 [2024-07-15 22:26:58.224511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.003 [2024-07-15 22:26:58.224539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.003 qpair failed and we were unable to recover it. 00:29:33.003 [2024-07-15 22:26:58.224940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.003 [2024-07-15 22:26:58.224948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.003 qpair failed and we were unable to recover it. 
00:29:33.003 [2024-07-15 22:26:58.225464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.004 [2024-07-15 22:26:58.225492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.004 qpair failed and we were unable to recover it. 00:29:33.004 [2024-07-15 22:26:58.225929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.004 [2024-07-15 22:26:58.225937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.004 qpair failed and we were unable to recover it. 00:29:33.004 [2024-07-15 22:26:58.226426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.004 [2024-07-15 22:26:58.226453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.004 qpair failed and we were unable to recover it. 00:29:33.004 [2024-07-15 22:26:58.226939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.004 [2024-07-15 22:26:58.226950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.004 qpair failed and we were unable to recover it. 00:29:33.004 [2024-07-15 22:26:58.227478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.004 [2024-07-15 22:26:58.227507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.004 qpair failed and we were unable to recover it. 00:29:33.004 [2024-07-15 22:26:58.227944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.004 [2024-07-15 22:26:58.227952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.004 qpair failed and we were unable to recover it. 00:29:33.004 [2024-07-15 22:26:58.228465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.004 [2024-07-15 22:26:58.228492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.004 qpair failed and we were unable to recover it. 00:29:33.004 [2024-07-15 22:26:58.228898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.004 [2024-07-15 22:26:58.228906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.004 qpair failed and we were unable to recover it. 00:29:33.004 [2024-07-15 22:26:58.229430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.004 [2024-07-15 22:26:58.229458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.004 qpair failed and we were unable to recover it. 00:29:33.004 [2024-07-15 22:26:58.229860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.004 [2024-07-15 22:26:58.229868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.004 qpair failed and we were unable to recover it. 
00:29:33.004 [2024-07-15 22:26:58.230361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.004 [2024-07-15 22:26:58.230388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.004 qpair failed and we were unable to recover it. 00:29:33.004 [2024-07-15 22:26:58.230600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.004 [2024-07-15 22:26:58.230610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.004 qpair failed and we were unable to recover it. 00:29:33.004 [2024-07-15 22:26:58.231032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.004 [2024-07-15 22:26:58.231039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.004 qpair failed and we were unable to recover it. 00:29:33.004 [2024-07-15 22:26:58.231460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.004 [2024-07-15 22:26:58.231467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.004 qpair failed and we were unable to recover it. 00:29:33.004 [2024-07-15 22:26:58.231896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.004 [2024-07-15 22:26:58.231903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.004 qpair failed and we were unable to recover it. 00:29:33.004 [2024-07-15 22:26:58.232334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.004 [2024-07-15 22:26:58.232341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.004 qpair failed and we were unable to recover it. 00:29:33.004 [2024-07-15 22:26:58.232728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.004 [2024-07-15 22:26:58.232735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.004 qpair failed and we were unable to recover it. 00:29:33.004 [2024-07-15 22:26:58.233164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.004 [2024-07-15 22:26:58.233171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.004 qpair failed and we were unable to recover it. 00:29:33.004 [2024-07-15 22:26:58.233582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.004 [2024-07-15 22:26:58.233589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.004 qpair failed and we were unable to recover it. 00:29:33.004 [2024-07-15 22:26:58.233978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.004 [2024-07-15 22:26:58.233986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.004 qpair failed and we were unable to recover it. 
00:29:33.004 [2024-07-15 22:26:58.234412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.004 [2024-07-15 22:26:58.234419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.004 qpair failed and we were unable to recover it. 00:29:33.004 [2024-07-15 22:26:58.234848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.004 [2024-07-15 22:26:58.234855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.004 qpair failed and we were unable to recover it. 00:29:33.004 [2024-07-15 22:26:58.235406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.004 [2024-07-15 22:26:58.235434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.004 qpair failed and we were unable to recover it. 00:29:33.004 [2024-07-15 22:26:58.235832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.004 [2024-07-15 22:26:58.235840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.004 qpair failed and we were unable to recover it. 00:29:33.004 [2024-07-15 22:26:58.236170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.004 [2024-07-15 22:26:58.236177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.004 qpair failed and we were unable to recover it. 00:29:33.004 [2024-07-15 22:26:58.236602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.004 [2024-07-15 22:26:58.236609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.004 qpair failed and we were unable to recover it. 00:29:33.004 [2024-07-15 22:26:58.236998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.004 [2024-07-15 22:26:58.237004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.004 qpair failed and we were unable to recover it. 00:29:33.004 [2024-07-15 22:26:58.237410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.004 [2024-07-15 22:26:58.237417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.004 qpair failed and we were unable to recover it. 00:29:33.004 [2024-07-15 22:26:58.237801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.004 [2024-07-15 22:26:58.237807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.004 qpair failed and we were unable to recover it. 00:29:33.004 [2024-07-15 22:26:58.238314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.004 [2024-07-15 22:26:58.238342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.004 qpair failed and we were unable to recover it. 
00:29:33.004 [2024-07-15 22:26:58.238745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.004 [2024-07-15 22:26:58.238753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.004 qpair failed and we were unable to recover it. 00:29:33.004 [2024-07-15 22:26:58.239167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.004 [2024-07-15 22:26:58.239174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.004 qpair failed and we were unable to recover it. 00:29:33.004 [2024-07-15 22:26:58.239579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.004 [2024-07-15 22:26:58.239586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.004 qpair failed and we were unable to recover it. 00:29:33.004 [2024-07-15 22:26:58.239896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.004 [2024-07-15 22:26:58.239903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.004 qpair failed and we were unable to recover it. 00:29:33.004 [2024-07-15 22:26:58.240290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.004 [2024-07-15 22:26:58.240297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.004 qpair failed and we were unable to recover it. 00:29:33.004 [2024-07-15 22:26:58.240497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.004 [2024-07-15 22:26:58.240506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.004 qpair failed and we were unable to recover it. 00:29:33.004 [2024-07-15 22:26:58.240932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.004 [2024-07-15 22:26:58.240939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.004 qpair failed and we were unable to recover it. 00:29:33.004 [2024-07-15 22:26:58.241339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.005 [2024-07-15 22:26:58.241346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.005 qpair failed and we were unable to recover it. 00:29:33.005 [2024-07-15 22:26:58.241739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.005 [2024-07-15 22:26:58.241746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.005 qpair failed and we were unable to recover it. 00:29:33.005 [2024-07-15 22:26:58.242143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.005 [2024-07-15 22:26:58.242150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.005 qpair failed and we were unable to recover it. 
00:29:33.005 [2024-07-15 22:26:58.242465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.005 [2024-07-15 22:26:58.242471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.005 qpair failed and we were unable to recover it. 00:29:33.005 [2024-07-15 22:26:58.242863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.005 [2024-07-15 22:26:58.242869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.005 qpair failed and we were unable to recover it. 00:29:33.005 [2024-07-15 22:26:58.243253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.005 [2024-07-15 22:26:58.243260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.005 qpair failed and we were unable to recover it. 00:29:33.005 [2024-07-15 22:26:58.243687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.005 [2024-07-15 22:26:58.243696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.005 qpair failed and we were unable to recover it. 00:29:33.005 [2024-07-15 22:26:58.244090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.005 [2024-07-15 22:26:58.244096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.005 qpair failed and we were unable to recover it. 00:29:33.005 [2024-07-15 22:26:58.244472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.005 [2024-07-15 22:26:58.244479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.005 qpair failed and we were unable to recover it. 00:29:33.005 [2024-07-15 22:26:58.244766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.005 [2024-07-15 22:26:58.244773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.005 qpair failed and we were unable to recover it. 00:29:33.005 [2024-07-15 22:26:58.245033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.005 [2024-07-15 22:26:58.245041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.005 qpair failed and we were unable to recover it. 00:29:33.005 [2024-07-15 22:26:58.245451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.005 [2024-07-15 22:26:58.245458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.005 qpair failed and we were unable to recover it. 00:29:33.005 [2024-07-15 22:26:58.245860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.005 [2024-07-15 22:26:58.245867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.005 qpair failed and we were unable to recover it. 
00:29:33.005 [2024-07-15 22:26:58.246277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.005 [2024-07-15 22:26:58.246283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.005 qpair failed and we were unable to recover it. 00:29:33.005 [2024-07-15 22:26:58.246670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.005 [2024-07-15 22:26:58.246676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.005 qpair failed and we were unable to recover it. 00:29:33.005 [2024-07-15 22:26:58.247092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.005 [2024-07-15 22:26:58.247098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.005 qpair failed and we were unable to recover it. 00:29:33.005 [2024-07-15 22:26:58.247519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.005 [2024-07-15 22:26:58.247526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.005 qpair failed and we were unable to recover it. 00:29:33.005 [2024-07-15 22:26:58.247712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.005 [2024-07-15 22:26:58.247721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.005 qpair failed and we were unable to recover it. 00:29:33.005 [2024-07-15 22:26:58.248024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.005 [2024-07-15 22:26:58.248032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.005 qpair failed and we were unable to recover it. 00:29:33.005 [2024-07-15 22:26:58.248522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.005 [2024-07-15 22:26:58.248529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.005 qpair failed and we were unable to recover it. 00:29:33.005 [2024-07-15 22:26:58.248916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.005 [2024-07-15 22:26:58.248923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.005 qpair failed and we were unable to recover it. 00:29:33.005 [2024-07-15 22:26:58.249344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.005 [2024-07-15 22:26:58.249351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.005 qpair failed and we were unable to recover it. 00:29:33.005 [2024-07-15 22:26:58.249754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.005 [2024-07-15 22:26:58.249760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.005 qpair failed and we were unable to recover it. 
00:29:33.005 [2024-07-15 22:26:58.250194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.005 [2024-07-15 22:26:58.250200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.005 qpair failed and we were unable to recover it. 00:29:33.005 [2024-07-15 22:26:58.250544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.005 [2024-07-15 22:26:58.250558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.005 qpair failed and we were unable to recover it. 00:29:33.005 [2024-07-15 22:26:58.250974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.005 [2024-07-15 22:26:58.250980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.005 qpair failed and we were unable to recover it. 00:29:33.005 [2024-07-15 22:26:58.251408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.005 [2024-07-15 22:26:58.251415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.005 qpair failed and we were unable to recover it. 00:29:33.005 [2024-07-15 22:26:58.251800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.005 [2024-07-15 22:26:58.251807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.005 qpair failed and we were unable to recover it. 00:29:33.005 [2024-07-15 22:26:58.252202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.005 [2024-07-15 22:26:58.252209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.005 qpair failed and we were unable to recover it. 00:29:33.005 [2024-07-15 22:26:58.252645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.005 [2024-07-15 22:26:58.252651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.005 qpair failed and we were unable to recover it. 00:29:33.005 [2024-07-15 22:26:58.253084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.005 [2024-07-15 22:26:58.253090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.005 qpair failed and we were unable to recover it. 00:29:33.005 [2024-07-15 22:26:58.253474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.005 [2024-07-15 22:26:58.253480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.005 qpair failed and we were unable to recover it. 00:29:33.005 [2024-07-15 22:26:58.253874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.005 [2024-07-15 22:26:58.253881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.005 qpair failed and we were unable to recover it. 
00:29:33.005 [2024-07-15 22:26:58.254386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.005 [2024-07-15 22:26:58.254414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.005 qpair failed and we were unable to recover it. 00:29:33.005 [2024-07-15 22:26:58.254837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.005 [2024-07-15 22:26:58.254846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.005 qpair failed and we were unable to recover it. 00:29:33.005 [2024-07-15 22:26:58.255280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.005 [2024-07-15 22:26:58.255287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.005 qpair failed and we were unable to recover it. 00:29:33.005 [2024-07-15 22:26:58.255682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.005 [2024-07-15 22:26:58.255689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.005 qpair failed and we were unable to recover it. 00:29:33.005 [2024-07-15 22:26:58.256090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.005 [2024-07-15 22:26:58.256096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.005 qpair failed and we were unable to recover it. 00:29:33.005 [2024-07-15 22:26:58.256487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.005 [2024-07-15 22:26:58.256493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.005 qpair failed and we were unable to recover it. 00:29:33.005 [2024-07-15 22:26:58.256892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.005 [2024-07-15 22:26:58.256899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.005 qpair failed and we were unable to recover it. 00:29:33.005 [2024-07-15 22:26:58.257419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.005 [2024-07-15 22:26:58.257446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.006 qpair failed and we were unable to recover it. 00:29:33.006 [2024-07-15 22:26:58.257652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.006 [2024-07-15 22:26:58.257661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.006 qpair failed and we were unable to recover it. 00:29:33.006 [2024-07-15 22:26:58.258084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.006 [2024-07-15 22:26:58.258091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.006 qpair failed and we were unable to recover it. 
00:29:33.006 [2024-07-15 22:26:58.258480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.006 [2024-07-15 22:26:58.258487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:33.006 qpair failed and we were unable to recover it.
[... the same posix_sock_create connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error sequence for tqpair=0x7f6158000b90 (addr=10.0.0.2, port=4420) repeats continuously from 22:26:58.258 through 22:26:58.342; each qpair failed and could not be recovered ...]
00:29:33.283 [2024-07-15 22:26:58.342426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.283 [2024-07-15 22:26:58.342433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:33.283 qpair failed and we were unable to recover it.
00:29:33.283 [2024-07-15 22:26:58.342823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.283 [2024-07-15 22:26:58.342830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.283 qpair failed and we were unable to recover it. 00:29:33.283 [2024-07-15 22:26:58.343220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.283 [2024-07-15 22:26:58.343227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.283 qpair failed and we were unable to recover it. 00:29:33.283 [2024-07-15 22:26:58.343536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.283 [2024-07-15 22:26:58.343543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.283 qpair failed and we were unable to recover it. 00:29:33.283 [2024-07-15 22:26:58.344038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.283 [2024-07-15 22:26:58.344044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.283 qpair failed and we were unable to recover it. 00:29:33.283 [2024-07-15 22:26:58.344450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.283 [2024-07-15 22:26:58.344457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.283 qpair failed and we were unable to recover it. 00:29:33.283 [2024-07-15 22:26:58.344872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.283 [2024-07-15 22:26:58.344879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.283 qpair failed and we were unable to recover it. 00:29:33.283 [2024-07-15 22:26:58.345288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.283 [2024-07-15 22:26:58.345295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.283 qpair failed and we were unable to recover it. 00:29:33.283 [2024-07-15 22:26:58.345703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.283 [2024-07-15 22:26:58.345710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.283 qpair failed and we were unable to recover it. 00:29:33.283 [2024-07-15 22:26:58.346101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.283 [2024-07-15 22:26:58.346108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.283 qpair failed and we were unable to recover it. 00:29:33.283 [2024-07-15 22:26:58.346542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.283 [2024-07-15 22:26:58.346549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.283 qpair failed and we were unable to recover it. 
00:29:33.283 [2024-07-15 22:26:58.346983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.283 [2024-07-15 22:26:58.346990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.283 qpair failed and we were unable to recover it. 00:29:33.283 [2024-07-15 22:26:58.347494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.283 [2024-07-15 22:26:58.347522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.283 qpair failed and we were unable to recover it. 00:29:33.283 [2024-07-15 22:26:58.347925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.283 [2024-07-15 22:26:58.347933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.283 qpair failed and we were unable to recover it. 00:29:33.283 [2024-07-15 22:26:58.348482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.283 [2024-07-15 22:26:58.348510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.283 qpair failed and we were unable to recover it. 00:29:33.283 [2024-07-15 22:26:58.348911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.283 [2024-07-15 22:26:58.348920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.283 qpair failed and we were unable to recover it. 00:29:33.283 [2024-07-15 22:26:58.349422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.283 [2024-07-15 22:26:58.349449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.283 qpair failed and we were unable to recover it. 00:29:33.283 [2024-07-15 22:26:58.349827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.283 [2024-07-15 22:26:58.349835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.283 qpair failed and we were unable to recover it. 00:29:33.283 [2024-07-15 22:26:58.350335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.283 [2024-07-15 22:26:58.350362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.283 qpair failed and we were unable to recover it. 00:29:33.283 [2024-07-15 22:26:58.350773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.283 [2024-07-15 22:26:58.350781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.283 qpair failed and we were unable to recover it. 00:29:33.283 [2024-07-15 22:26:58.351182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.283 [2024-07-15 22:26:58.351189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.283 qpair failed and we were unable to recover it. 
00:29:33.283 [2024-07-15 22:26:58.351576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.283 [2024-07-15 22:26:58.351583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.283 qpair failed and we were unable to recover it. 00:29:33.283 [2024-07-15 22:26:58.352046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.283 [2024-07-15 22:26:58.352053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.283 qpair failed and we were unable to recover it. 00:29:33.283 [2024-07-15 22:26:58.352467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.283 [2024-07-15 22:26:58.352475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.283 qpair failed and we were unable to recover it. 00:29:33.283 [2024-07-15 22:26:58.352950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.283 [2024-07-15 22:26:58.352957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.283 qpair failed and we were unable to recover it. 00:29:33.283 [2024-07-15 22:26:58.353431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.283 [2024-07-15 22:26:58.353458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.283 qpair failed and we were unable to recover it. 00:29:33.283 [2024-07-15 22:26:58.353864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.283 [2024-07-15 22:26:58.353872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.283 qpair failed and we were unable to recover it. 00:29:33.283 [2024-07-15 22:26:58.354363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.283 [2024-07-15 22:26:58.354390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.283 qpair failed and we were unable to recover it. 00:29:33.283 [2024-07-15 22:26:58.354708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.283 [2024-07-15 22:26:58.354716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.283 qpair failed and we were unable to recover it. 00:29:33.283 [2024-07-15 22:26:58.354917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.283 [2024-07-15 22:26:58.354927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.283 qpair failed and we were unable to recover it. 00:29:33.283 [2024-07-15 22:26:58.355300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.283 [2024-07-15 22:26:58.355307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.283 qpair failed and we were unable to recover it. 
00:29:33.283 [2024-07-15 22:26:58.355572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.283 [2024-07-15 22:26:58.355580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.284 qpair failed and we were unable to recover it. 00:29:33.284 [2024-07-15 22:26:58.355985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.284 [2024-07-15 22:26:58.355992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.284 qpair failed and we were unable to recover it. 00:29:33.284 [2024-07-15 22:26:58.356386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.284 [2024-07-15 22:26:58.356393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.284 qpair failed and we were unable to recover it. 00:29:33.284 [2024-07-15 22:26:58.356779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.284 [2024-07-15 22:26:58.356788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.284 qpair failed and we were unable to recover it. 00:29:33.284 [2024-07-15 22:26:58.357086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.284 [2024-07-15 22:26:58.357093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.284 qpair failed and we were unable to recover it. 00:29:33.284 [2024-07-15 22:26:58.357507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.284 [2024-07-15 22:26:58.357514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.284 qpair failed and we were unable to recover it. 00:29:33.284 [2024-07-15 22:26:58.357710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.284 [2024-07-15 22:26:58.357719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.284 qpair failed and we were unable to recover it. 00:29:33.284 [2024-07-15 22:26:58.358133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.284 [2024-07-15 22:26:58.358139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.284 qpair failed and we were unable to recover it. 00:29:33.284 [2024-07-15 22:26:58.358526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.284 [2024-07-15 22:26:58.358532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.284 qpair failed and we were unable to recover it. 00:29:33.284 [2024-07-15 22:26:58.358932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.284 [2024-07-15 22:26:58.358938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.284 qpair failed and we were unable to recover it. 
00:29:33.284 [2024-07-15 22:26:58.359366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.284 [2024-07-15 22:26:58.359372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.284 qpair failed and we were unable to recover it. 00:29:33.284 [2024-07-15 22:26:58.359757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.284 [2024-07-15 22:26:58.359763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.284 qpair failed and we were unable to recover it. 00:29:33.284 [2024-07-15 22:26:58.360146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.284 [2024-07-15 22:26:58.360154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.284 qpair failed and we were unable to recover it. 00:29:33.284 [2024-07-15 22:26:58.360552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.284 [2024-07-15 22:26:58.360558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.284 qpair failed and we were unable to recover it. 00:29:33.284 [2024-07-15 22:26:58.360944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.284 [2024-07-15 22:26:58.360950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.284 qpair failed and we were unable to recover it. 00:29:33.284 [2024-07-15 22:26:58.361378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.284 [2024-07-15 22:26:58.361384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.284 qpair failed and we were unable to recover it. 00:29:33.284 [2024-07-15 22:26:58.361781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.284 [2024-07-15 22:26:58.361787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.284 qpair failed and we were unable to recover it. 00:29:33.284 [2024-07-15 22:26:58.362090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.284 [2024-07-15 22:26:58.362097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.284 qpair failed and we were unable to recover it. 00:29:33.284 [2024-07-15 22:26:58.362521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.284 [2024-07-15 22:26:58.362527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.284 qpair failed and we were unable to recover it. 00:29:33.284 [2024-07-15 22:26:58.362921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.284 [2024-07-15 22:26:58.362927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.284 qpair failed and we were unable to recover it. 
00:29:33.284 [2024-07-15 22:26:58.363401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.284 [2024-07-15 22:26:58.363429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.284 qpair failed and we were unable to recover it. 00:29:33.284 [2024-07-15 22:26:58.363768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.284 [2024-07-15 22:26:58.363776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.284 qpair failed and we were unable to recover it. 00:29:33.284 [2024-07-15 22:26:58.364184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.284 [2024-07-15 22:26:58.364192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.284 qpair failed and we were unable to recover it. 00:29:33.284 [2024-07-15 22:26:58.364592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.284 [2024-07-15 22:26:58.364599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.284 qpair failed and we were unable to recover it. 00:29:33.284 [2024-07-15 22:26:58.364999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.284 [2024-07-15 22:26:58.365006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.284 qpair failed and we were unable to recover it. 00:29:33.284 [2024-07-15 22:26:58.365426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.284 [2024-07-15 22:26:58.365433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.284 qpair failed and we were unable to recover it. 00:29:33.284 [2024-07-15 22:26:58.365828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.284 [2024-07-15 22:26:58.365834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.284 qpair failed and we were unable to recover it. 00:29:33.284 [2024-07-15 22:26:58.366339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.284 [2024-07-15 22:26:58.366366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.284 qpair failed and we were unable to recover it. 00:29:33.284 [2024-07-15 22:26:58.366772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.284 [2024-07-15 22:26:58.366780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.284 qpair failed and we were unable to recover it. 00:29:33.284 [2024-07-15 22:26:58.367169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.284 [2024-07-15 22:26:58.367176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.284 qpair failed and we were unable to recover it. 
00:29:33.284 [2024-07-15 22:26:58.367572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.284 [2024-07-15 22:26:58.367579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.284 qpair failed and we were unable to recover it. 00:29:33.284 [2024-07-15 22:26:58.368015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.284 [2024-07-15 22:26:58.368022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.284 qpair failed and we were unable to recover it. 00:29:33.284 [2024-07-15 22:26:58.368450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.284 [2024-07-15 22:26:58.368457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.284 qpair failed and we were unable to recover it. 00:29:33.284 [2024-07-15 22:26:58.368851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.284 [2024-07-15 22:26:58.368858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.284 qpair failed and we were unable to recover it. 00:29:33.284 [2024-07-15 22:26:58.369254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.284 [2024-07-15 22:26:58.369261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.284 qpair failed and we were unable to recover it. 00:29:33.284 [2024-07-15 22:26:58.369686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.284 [2024-07-15 22:26:58.369694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.284 qpair failed and we were unable to recover it. 00:29:33.284 [2024-07-15 22:26:58.369988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.284 [2024-07-15 22:26:58.369996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.284 qpair failed and we were unable to recover it. 00:29:33.284 [2024-07-15 22:26:58.370400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.284 [2024-07-15 22:26:58.370406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.284 qpair failed and we were unable to recover it. 00:29:33.284 [2024-07-15 22:26:58.370802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.284 [2024-07-15 22:26:58.370808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.284 qpair failed and we were unable to recover it. 00:29:33.284 [2024-07-15 22:26:58.371299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.284 [2024-07-15 22:26:58.371327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.284 qpair failed and we were unable to recover it. 
00:29:33.284 [2024-07-15 22:26:58.371734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.284 [2024-07-15 22:26:58.371742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.284 qpair failed and we were unable to recover it. 00:29:33.285 [2024-07-15 22:26:58.372178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.285 [2024-07-15 22:26:58.372185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.285 qpair failed and we were unable to recover it. 00:29:33.285 [2024-07-15 22:26:58.372580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.285 [2024-07-15 22:26:58.372587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.285 qpair failed and we were unable to recover it. 00:29:33.285 [2024-07-15 22:26:58.372995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.285 [2024-07-15 22:26:58.373005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.285 qpair failed and we were unable to recover it. 00:29:33.285 [2024-07-15 22:26:58.373400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.285 [2024-07-15 22:26:58.373407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.285 qpair failed and we were unable to recover it. 00:29:33.285 [2024-07-15 22:26:58.373796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.285 [2024-07-15 22:26:58.373802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.285 qpair failed and we were unable to recover it. 00:29:33.285 [2024-07-15 22:26:58.374293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.285 [2024-07-15 22:26:58.374320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.285 qpair failed and we were unable to recover it. 00:29:33.285 [2024-07-15 22:26:58.374727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.285 [2024-07-15 22:26:58.374735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.285 qpair failed and we were unable to recover it. 00:29:33.285 [2024-07-15 22:26:58.375118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.285 [2024-07-15 22:26:58.375130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.285 qpair failed and we were unable to recover it. 00:29:33.285 [2024-07-15 22:26:58.375546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.285 [2024-07-15 22:26:58.375553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.285 qpair failed and we were unable to recover it. 
00:29:33.285 [2024-07-15 22:26:58.375944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.285 [2024-07-15 22:26:58.375952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.285 qpair failed and we were unable to recover it. 00:29:33.285 [2024-07-15 22:26:58.376446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.285 [2024-07-15 22:26:58.376474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.285 qpair failed and we were unable to recover it. 00:29:33.285 [2024-07-15 22:26:58.376882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.285 [2024-07-15 22:26:58.376891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.285 qpair failed and we were unable to recover it. 00:29:33.285 [2024-07-15 22:26:58.377465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.285 [2024-07-15 22:26:58.377492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.285 qpair failed and we were unable to recover it. 00:29:33.285 [2024-07-15 22:26:58.377899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.285 [2024-07-15 22:26:58.377907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.285 qpair failed and we were unable to recover it. 00:29:33.285 [2024-07-15 22:26:58.378445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.285 [2024-07-15 22:26:58.378472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.285 qpair failed and we were unable to recover it. 00:29:33.285 [2024-07-15 22:26:58.378872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.285 [2024-07-15 22:26:58.378880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.285 qpair failed and we were unable to recover it. 00:29:33.285 [2024-07-15 22:26:58.379402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.285 [2024-07-15 22:26:58.379430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.285 qpair failed and we were unable to recover it. 00:29:33.285 [2024-07-15 22:26:58.379884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.285 [2024-07-15 22:26:58.379893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.285 qpair failed and we were unable to recover it. 00:29:33.285 [2024-07-15 22:26:58.380418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.285 [2024-07-15 22:26:58.380445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.285 qpair failed and we were unable to recover it. 
00:29:33.285 [2024-07-15 22:26:58.380846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.285 [2024-07-15 22:26:58.380855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.285 qpair failed and we were unable to recover it. 00:29:33.285 [2024-07-15 22:26:58.381355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.285 [2024-07-15 22:26:58.381382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.285 qpair failed and we were unable to recover it. 00:29:33.285 [2024-07-15 22:26:58.381809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.285 [2024-07-15 22:26:58.381817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.285 qpair failed and we were unable to recover it. 00:29:33.285 [2024-07-15 22:26:58.382209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.285 [2024-07-15 22:26:58.382216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.285 qpair failed and we were unable to recover it. 00:29:33.285 [2024-07-15 22:26:58.382605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.285 [2024-07-15 22:26:58.382611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.285 qpair failed and we were unable to recover it. 00:29:33.285 [2024-07-15 22:26:58.383046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.285 [2024-07-15 22:26:58.383053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.285 qpair failed and we were unable to recover it. 00:29:33.285 [2024-07-15 22:26:58.383503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.285 [2024-07-15 22:26:58.383510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.285 qpair failed and we were unable to recover it. 00:29:33.285 [2024-07-15 22:26:58.383936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.285 [2024-07-15 22:26:58.383944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.285 qpair failed and we were unable to recover it. 00:29:33.285 [2024-07-15 22:26:58.384350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.285 [2024-07-15 22:26:58.384377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.285 qpair failed and we were unable to recover it. 00:29:33.285 [2024-07-15 22:26:58.384687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.285 [2024-07-15 22:26:58.384695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.285 qpair failed and we were unable to recover it. 
00:29:33.285 [2024-07-15 22:26:58.385137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.285 [2024-07-15 22:26:58.385145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.285 qpair failed and we were unable to recover it. 00:29:33.285 [2024-07-15 22:26:58.385538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.285 [2024-07-15 22:26:58.385545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.285 qpair failed and we were unable to recover it. 00:29:33.285 [2024-07-15 22:26:58.385924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.285 [2024-07-15 22:26:58.385931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.285 qpair failed and we were unable to recover it. 00:29:33.286 [2024-07-15 22:26:58.386405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.286 [2024-07-15 22:26:58.386432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.286 qpair failed and we were unable to recover it. 00:29:33.286 [2024-07-15 22:26:58.386862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.286 [2024-07-15 22:26:58.386870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.286 qpair failed and we were unable to recover it. 00:29:33.286 [2024-07-15 22:26:58.387360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.286 [2024-07-15 22:26:58.387387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.286 qpair failed and we were unable to recover it. 00:29:33.286 [2024-07-15 22:26:58.387798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.286 [2024-07-15 22:26:58.387806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.286 qpair failed and we were unable to recover it. 00:29:33.286 [2024-07-15 22:26:58.388200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.286 [2024-07-15 22:26:58.388207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.286 qpair failed and we were unable to recover it. 00:29:33.286 [2024-07-15 22:26:58.388516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.286 [2024-07-15 22:26:58.388523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.286 qpair failed and we were unable to recover it. 00:29:33.286 [2024-07-15 22:26:58.388959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.286 [2024-07-15 22:26:58.388966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.286 qpair failed and we were unable to recover it. 
00:29:33.286 [2024-07-15 22:26:58.389394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.286 [2024-07-15 22:26:58.389401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.286 qpair failed and we were unable to recover it. 00:29:33.286 [2024-07-15 22:26:58.389726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.286 [2024-07-15 22:26:58.389733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.286 qpair failed and we were unable to recover it. 00:29:33.286 [2024-07-15 22:26:58.390183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.286 [2024-07-15 22:26:58.390190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.286 qpair failed and we were unable to recover it. 00:29:33.286 [2024-07-15 22:26:58.390603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.286 [2024-07-15 22:26:58.390613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.286 qpair failed and we were unable to recover it. 00:29:33.286 [2024-07-15 22:26:58.391024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.286 [2024-07-15 22:26:58.391030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.286 qpair failed and we were unable to recover it. 00:29:33.286 [2024-07-15 22:26:58.391457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.286 [2024-07-15 22:26:58.391463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.286 qpair failed and we were unable to recover it. 00:29:33.286 [2024-07-15 22:26:58.391860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.286 [2024-07-15 22:26:58.391867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.286 qpair failed and we were unable to recover it. 00:29:33.286 [2024-07-15 22:26:58.392265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.286 [2024-07-15 22:26:58.392272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.286 qpair failed and we were unable to recover it. 00:29:33.286 [2024-07-15 22:26:58.392696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.286 [2024-07-15 22:26:58.392702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.286 qpair failed and we were unable to recover it. 00:29:33.286 [2024-07-15 22:26:58.392969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.286 [2024-07-15 22:26:58.392976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.286 qpair failed and we were unable to recover it. 
00:29:33.286 [2024-07-15 22:26:58.393389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.286 [2024-07-15 22:26:58.393396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.286 qpair failed and we were unable to recover it. 00:29:33.286 [2024-07-15 22:26:58.393823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.286 [2024-07-15 22:26:58.393829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.286 qpair failed and we were unable to recover it. 00:29:33.286 [2024-07-15 22:26:58.394039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.286 [2024-07-15 22:26:58.394049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.286 qpair failed and we were unable to recover it. 00:29:33.286 [2024-07-15 22:26:58.394339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.286 [2024-07-15 22:26:58.394347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.286 qpair failed and we were unable to recover it. 00:29:33.286 [2024-07-15 22:26:58.394774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.286 [2024-07-15 22:26:58.394780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.286 qpair failed and we were unable to recover it. 00:29:33.286 [2024-07-15 22:26:58.395165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.286 [2024-07-15 22:26:58.395172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.286 qpair failed and we were unable to recover it. 00:29:33.286 [2024-07-15 22:26:58.395574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.286 [2024-07-15 22:26:58.395580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.286 qpair failed and we were unable to recover it. 00:29:33.286 [2024-07-15 22:26:58.396090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.286 [2024-07-15 22:26:58.396096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.286 qpair failed and we were unable to recover it. 00:29:33.286 [2024-07-15 22:26:58.396527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.286 [2024-07-15 22:26:58.396534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.286 qpair failed and we were unable to recover it. 00:29:33.286 [2024-07-15 22:26:58.396960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.286 [2024-07-15 22:26:58.396967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.286 qpair failed and we were unable to recover it. 
00:29:33.286 [2024-07-15 22:26:58.397487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.286 [2024-07-15 22:26:58.397515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.286 qpair failed and we were unable to recover it. 00:29:33.286 [2024-07-15 22:26:58.397918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.286 [2024-07-15 22:26:58.397926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.286 qpair failed and we were unable to recover it. 00:29:33.286 [2024-07-15 22:26:58.398363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.286 [2024-07-15 22:26:58.398390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.286 qpair failed and we were unable to recover it. 00:29:33.286 [2024-07-15 22:26:58.398600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.286 [2024-07-15 22:26:58.398610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.286 qpair failed and we were unable to recover it. 00:29:33.286 [2024-07-15 22:26:58.399004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.286 [2024-07-15 22:26:58.399011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.286 qpair failed and we were unable to recover it. 00:29:33.286 [2024-07-15 22:26:58.399406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.286 [2024-07-15 22:26:58.399413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.286 qpair failed and we were unable to recover it. 00:29:33.286 [2024-07-15 22:26:58.399810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.286 [2024-07-15 22:26:58.399817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.286 qpair failed and we were unable to recover it. 00:29:33.286 [2024-07-15 22:26:58.400209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.286 [2024-07-15 22:26:58.400216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.286 qpair failed and we were unable to recover it. 00:29:33.286 [2024-07-15 22:26:58.400605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.286 [2024-07-15 22:26:58.400612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.286 qpair failed and we were unable to recover it. 00:29:33.286 [2024-07-15 22:26:58.401035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.286 [2024-07-15 22:26:58.401042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.286 qpair failed and we were unable to recover it. 
00:29:33.286 [2024-07-15 22:26:58.401442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.286 [2024-07-15 22:26:58.401450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.286 qpair failed and we were unable to recover it. 00:29:33.286 [2024-07-15 22:26:58.401886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.286 [2024-07-15 22:26:58.401893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.286 qpair failed and we were unable to recover it. 00:29:33.286 [2024-07-15 22:26:58.402333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.286 [2024-07-15 22:26:58.402340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.286 qpair failed and we were unable to recover it. 00:29:33.286 [2024-07-15 22:26:58.402650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.287 [2024-07-15 22:26:58.402657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.287 qpair failed and we were unable to recover it. 00:29:33.287 [2024-07-15 22:26:58.403066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.287 [2024-07-15 22:26:58.403072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.287 qpair failed and we were unable to recover it. 00:29:33.287 [2024-07-15 22:26:58.403461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.287 [2024-07-15 22:26:58.403467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.287 qpair failed and we were unable to recover it. 00:29:33.287 [2024-07-15 22:26:58.403887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.287 [2024-07-15 22:26:58.403893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.287 qpair failed and we were unable to recover it. 00:29:33.287 [2024-07-15 22:26:58.404276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.287 [2024-07-15 22:26:58.404283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.287 qpair failed and we were unable to recover it. 00:29:33.287 [2024-07-15 22:26:58.404692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.287 [2024-07-15 22:26:58.404699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.287 qpair failed and we were unable to recover it. 00:29:33.287 [2024-07-15 22:26:58.405134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.287 [2024-07-15 22:26:58.405141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.287 qpair failed and we were unable to recover it. 
00:29:33.287 [2024-07-15 22:26:58.405536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.287 [2024-07-15 22:26:58.405543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.287 qpair failed and we were unable to recover it. 00:29:33.287 [2024-07-15 22:26:58.405948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.287 [2024-07-15 22:26:58.405954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.287 qpair failed and we were unable to recover it. 00:29:33.287 [2024-07-15 22:26:58.406452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.287 [2024-07-15 22:26:58.406480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.287 qpair failed and we were unable to recover it. 00:29:33.287 [2024-07-15 22:26:58.406910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.287 [2024-07-15 22:26:58.406921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.287 qpair failed and we were unable to recover it. 00:29:33.287 [2024-07-15 22:26:58.407413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.287 [2024-07-15 22:26:58.407441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.287 qpair failed and we were unable to recover it. 00:29:33.287 [2024-07-15 22:26:58.407847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.287 [2024-07-15 22:26:58.407855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.287 qpair failed and we were unable to recover it. 00:29:33.287 [2024-07-15 22:26:58.408273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.287 [2024-07-15 22:26:58.408300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.287 qpair failed and we were unable to recover it. 00:29:33.287 [2024-07-15 22:26:58.408752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.287 [2024-07-15 22:26:58.408760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.287 qpair failed and we were unable to recover it. 00:29:33.287 [2024-07-15 22:26:58.409153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.287 [2024-07-15 22:26:58.409160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.287 qpair failed and we were unable to recover it. 00:29:33.287 [2024-07-15 22:26:58.409574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.287 [2024-07-15 22:26:58.409580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.287 qpair failed and we were unable to recover it. 
00:29:33.287 [2024-07-15 22:26:58.410004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.287 [2024-07-15 22:26:58.410011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.287 qpair failed and we were unable to recover it. 00:29:33.287 [2024-07-15 22:26:58.410437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.287 [2024-07-15 22:26:58.410444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.287 qpair failed and we were unable to recover it. 00:29:33.287 [2024-07-15 22:26:58.410869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.287 [2024-07-15 22:26:58.410876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.287 qpair failed and we were unable to recover it. 00:29:33.287 [2024-07-15 22:26:58.411095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.287 [2024-07-15 22:26:58.411105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.287 qpair failed and we were unable to recover it. 00:29:33.287 [2024-07-15 22:26:58.411535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.287 [2024-07-15 22:26:58.411543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.287 qpair failed and we were unable to recover it. 00:29:33.287 [2024-07-15 22:26:58.411851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.287 [2024-07-15 22:26:58.411859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.287 qpair failed and we were unable to recover it. 00:29:33.287 [2024-07-15 22:26:58.412259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.287 [2024-07-15 22:26:58.412266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.287 qpair failed and we were unable to recover it. 00:29:33.287 [2024-07-15 22:26:58.412636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.287 [2024-07-15 22:26:58.412643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.287 qpair failed and we were unable to recover it. 00:29:33.287 [2024-07-15 22:26:58.413040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.287 [2024-07-15 22:26:58.413047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.287 qpair failed and we were unable to recover it. 00:29:33.287 [2024-07-15 22:26:58.413451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.287 [2024-07-15 22:26:58.413458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.287 qpair failed and we were unable to recover it. 
00:29:33.287 [2024-07-15 22:26:58.413854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.287 [2024-07-15 22:26:58.413860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.287 qpair failed and we were unable to recover it. 00:29:33.287 [2024-07-15 22:26:58.414258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.287 [2024-07-15 22:26:58.414265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.287 qpair failed and we were unable to recover it. 00:29:33.287 [2024-07-15 22:26:58.414681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.287 [2024-07-15 22:26:58.414688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.287 qpair failed and we were unable to recover it. 00:29:33.287 [2024-07-15 22:26:58.415124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.287 [2024-07-15 22:26:58.415131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.287 qpair failed and we were unable to recover it. 00:29:33.287 [2024-07-15 22:26:58.415565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.287 [2024-07-15 22:26:58.415572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.287 qpair failed and we were unable to recover it. 00:29:33.287 [2024-07-15 22:26:58.415877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.287 [2024-07-15 22:26:58.415884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.287 qpair failed and we were unable to recover it. 00:29:33.287 [2024-07-15 22:26:58.416315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.287 [2024-07-15 22:26:58.416342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.287 qpair failed and we were unable to recover it. 00:29:33.287 [2024-07-15 22:26:58.416807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.287 [2024-07-15 22:26:58.416815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.287 qpair failed and we were unable to recover it. 00:29:33.287 [2024-07-15 22:26:58.417283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.287 [2024-07-15 22:26:58.417291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.287 qpair failed and we were unable to recover it. 00:29:33.287 [2024-07-15 22:26:58.417678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.287 [2024-07-15 22:26:58.417685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.287 qpair failed and we were unable to recover it. 
00:29:33.287 [2024-07-15 22:26:58.418074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.287 [2024-07-15 22:26:58.418081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.287 qpair failed and we were unable to recover it. 00:29:33.287 [2024-07-15 22:26:58.418481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.287 [2024-07-15 22:26:58.418488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.287 qpair failed and we were unable to recover it. 00:29:33.287 [2024-07-15 22:26:58.418909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.287 [2024-07-15 22:26:58.418916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.287 qpair failed and we were unable to recover it. 00:29:33.287 [2024-07-15 22:26:58.419320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.287 [2024-07-15 22:26:58.419347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.287 qpair failed and we were unable to recover it. 00:29:33.288 [2024-07-15 22:26:58.419750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.288 [2024-07-15 22:26:58.419758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.288 qpair failed and we were unable to recover it. 00:29:33.288 [2024-07-15 22:26:58.420147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.288 [2024-07-15 22:26:58.420154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.288 qpair failed and we were unable to recover it. 00:29:33.288 [2024-07-15 22:26:58.420583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.288 [2024-07-15 22:26:58.420590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.288 qpair failed and we were unable to recover it. 00:29:33.288 [2024-07-15 22:26:58.421006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.288 [2024-07-15 22:26:58.421012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.288 qpair failed and we were unable to recover it. 00:29:33.288 [2024-07-15 22:26:58.421412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.288 [2024-07-15 22:26:58.421420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.288 qpair failed and we were unable to recover it. 00:29:33.288 [2024-07-15 22:26:58.421729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.288 [2024-07-15 22:26:58.421736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.288 qpair failed and we were unable to recover it. 
00:29:33.288 [2024-07-15 22:26:58.422132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.288 [2024-07-15 22:26:58.422139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.288 qpair failed and we were unable to recover it. 00:29:33.288 [2024-07-15 22:26:58.422526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.288 [2024-07-15 22:26:58.422533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.288 qpair failed and we were unable to recover it. 00:29:33.288 [2024-07-15 22:26:58.422912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.288 [2024-07-15 22:26:58.422919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.288 qpair failed and we were unable to recover it. 00:29:33.288 [2024-07-15 22:26:58.423345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.288 [2024-07-15 22:26:58.423355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.288 qpair failed and we were unable to recover it. 00:29:33.288 [2024-07-15 22:26:58.423750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.288 [2024-07-15 22:26:58.423757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.288 qpair failed and we were unable to recover it. 00:29:33.288 [2024-07-15 22:26:58.424066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.288 [2024-07-15 22:26:58.424073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.288 qpair failed and we were unable to recover it. 00:29:33.288 [2024-07-15 22:26:58.424484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.288 [2024-07-15 22:26:58.424491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.288 qpair failed and we were unable to recover it. 00:29:33.288 [2024-07-15 22:26:58.424768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.288 [2024-07-15 22:26:58.424776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.288 qpair failed and we were unable to recover it. 00:29:33.288 [2024-07-15 22:26:58.425214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.288 [2024-07-15 22:26:58.425221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.288 qpair failed and we were unable to recover it. 00:29:33.288 [2024-07-15 22:26:58.425632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.288 [2024-07-15 22:26:58.425639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.288 qpair failed and we were unable to recover it. 
00:29:33.288 [2024-07-15 22:26:58.426045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.288 [2024-07-15 22:26:58.426052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.288 qpair failed and we were unable to recover it. 00:29:33.288 [2024-07-15 22:26:58.426472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.288 [2024-07-15 22:26:58.426479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.288 qpair failed and we were unable to recover it. 00:29:33.288 [2024-07-15 22:26:58.426863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.288 [2024-07-15 22:26:58.426870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.288 qpair failed and we were unable to recover it. 00:29:33.288 [2024-07-15 22:26:58.427278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.288 [2024-07-15 22:26:58.427285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.288 qpair failed and we were unable to recover it. 00:29:33.288 [2024-07-15 22:26:58.427685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.288 [2024-07-15 22:26:58.427691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.288 qpair failed and we were unable to recover it. 00:29:33.288 [2024-07-15 22:26:58.428061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.288 [2024-07-15 22:26:58.428067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.288 qpair failed and we were unable to recover it. 00:29:33.288 [2024-07-15 22:26:58.428486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.288 [2024-07-15 22:26:58.428493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.288 qpair failed and we were unable to recover it. 00:29:33.288 [2024-07-15 22:26:58.428911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.288 [2024-07-15 22:26:58.428918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.288 qpair failed and we were unable to recover it. 00:29:33.288 [2024-07-15 22:26:58.429397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.288 [2024-07-15 22:26:58.429425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.288 qpair failed and we were unable to recover it. 00:29:33.288 [2024-07-15 22:26:58.429855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.288 [2024-07-15 22:26:58.429864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.288 qpair failed and we were unable to recover it. 
00:29:33.288 [2024-07-15 22:26:58.430181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.288 [2024-07-15 22:26:58.430189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.288 qpair failed and we were unable to recover it. 00:29:33.288 [2024-07-15 22:26:58.430610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.288 [2024-07-15 22:26:58.430618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.288 qpair failed and we were unable to recover it. 00:29:33.288 [2024-07-15 22:26:58.431057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.288 [2024-07-15 22:26:58.431063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.288 qpair failed and we were unable to recover it. 00:29:33.288 [2024-07-15 22:26:58.431516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.288 [2024-07-15 22:26:58.431523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.288 qpair failed and we were unable to recover it. 00:29:33.288 [2024-07-15 22:26:58.431924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.288 [2024-07-15 22:26:58.431930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.288 qpair failed and we were unable to recover it. 00:29:33.288 [2024-07-15 22:26:58.432338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.288 [2024-07-15 22:26:58.432366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.288 qpair failed and we were unable to recover it. 00:29:33.288 [2024-07-15 22:26:58.432679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.288 [2024-07-15 22:26:58.432687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.288 qpair failed and we were unable to recover it. 00:29:33.288 [2024-07-15 22:26:58.433094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.288 [2024-07-15 22:26:58.433100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.288 qpair failed and we were unable to recover it. 00:29:33.288 [2024-07-15 22:26:58.433573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.288 [2024-07-15 22:26:58.433580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.288 qpair failed and we were unable to recover it. 00:29:33.288 [2024-07-15 22:26:58.433776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.288 [2024-07-15 22:26:58.433785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.288 qpair failed and we were unable to recover it. 
00:29:33.288 [2024-07-15 22:26:58.434212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.288 [2024-07-15 22:26:58.434220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.288 qpair failed and we were unable to recover it. 00:29:33.288 [2024-07-15 22:26:58.434631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.288 [2024-07-15 22:26:58.434637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.288 qpair failed and we were unable to recover it. 00:29:33.288 [2024-07-15 22:26:58.435027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.288 [2024-07-15 22:26:58.435033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.288 qpair failed and we were unable to recover it. 00:29:33.288 [2024-07-15 22:26:58.435425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.288 [2024-07-15 22:26:58.435432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.288 qpair failed and we were unable to recover it. 00:29:33.288 [2024-07-15 22:26:58.435850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.289 [2024-07-15 22:26:58.435856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.289 qpair failed and we were unable to recover it. 00:29:33.289 [2024-07-15 22:26:58.436256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.289 [2024-07-15 22:26:58.436263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.289 qpair failed and we were unable to recover it. 00:29:33.289 [2024-07-15 22:26:58.436526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.289 [2024-07-15 22:26:58.436532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.289 qpair failed and we were unable to recover it. 00:29:33.289 [2024-07-15 22:26:58.436959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.289 [2024-07-15 22:26:58.436966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.289 qpair failed and we were unable to recover it. 00:29:33.289 [2024-07-15 22:26:58.437263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.289 [2024-07-15 22:26:58.437270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.289 qpair failed and we were unable to recover it. 00:29:33.289 [2024-07-15 22:26:58.437473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.289 [2024-07-15 22:26:58.437481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.289 qpair failed and we were unable to recover it. 
00:29:33.289 [2024-07-15 22:26:58.437848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.289 [2024-07-15 22:26:58.437855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.289 qpair failed and we were unable to recover it. 00:29:33.289 [2024-07-15 22:26:58.438267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.289 [2024-07-15 22:26:58.438274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.289 qpair failed and we were unable to recover it. 00:29:33.289 [2024-07-15 22:26:58.438580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.289 [2024-07-15 22:26:58.438587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.289 qpair failed and we were unable to recover it. 00:29:33.289 [2024-07-15 22:26:58.438998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.289 [2024-07-15 22:26:58.439004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.289 qpair failed and we were unable to recover it. 00:29:33.289 [2024-07-15 22:26:58.439398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.289 [2024-07-15 22:26:58.439406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.289 qpair failed and we were unable to recover it. 00:29:33.289 [2024-07-15 22:26:58.439900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.289 [2024-07-15 22:26:58.439906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.289 qpair failed and we were unable to recover it. 00:29:33.289 [2024-07-15 22:26:58.440094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.289 [2024-07-15 22:26:58.440102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.289 qpair failed and we were unable to recover it. 00:29:33.289 [2024-07-15 22:26:58.440519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.289 [2024-07-15 22:26:58.440526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.289 qpair failed and we were unable to recover it. 00:29:33.289 [2024-07-15 22:26:58.440917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.289 [2024-07-15 22:26:58.440924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.289 qpair failed and we were unable to recover it. 00:29:33.289 [2024-07-15 22:26:58.441434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.289 [2024-07-15 22:26:58.441462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.289 qpair failed and we were unable to recover it. 
00:29:33.289 [2024-07-15 22:26:58.441760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.289 [2024-07-15 22:26:58.441769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.289 qpair failed and we were unable to recover it. 00:29:33.289 [2024-07-15 22:26:58.442178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.289 [2024-07-15 22:26:58.442185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.289 qpair failed and we were unable to recover it. 00:29:33.289 [2024-07-15 22:26:58.442587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.289 [2024-07-15 22:26:58.442594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.289 qpair failed and we were unable to recover it. 00:29:33.289 [2024-07-15 22:26:58.442986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.289 [2024-07-15 22:26:58.442993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.289 qpair failed and we were unable to recover it. 00:29:33.289 [2024-07-15 22:26:58.443472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.289 [2024-07-15 22:26:58.443479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.289 qpair failed and we were unable to recover it. 00:29:33.289 [2024-07-15 22:26:58.443893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.289 [2024-07-15 22:26:58.443899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.289 qpair failed and we were unable to recover it. 00:29:33.289 [2024-07-15 22:26:58.444108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.289 [2024-07-15 22:26:58.444118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.289 qpair failed and we were unable to recover it. 00:29:33.289 [2024-07-15 22:26:58.444393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.289 [2024-07-15 22:26:58.444401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.289 qpair failed and we were unable to recover it. 00:29:33.289 [2024-07-15 22:26:58.444692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.289 [2024-07-15 22:26:58.444699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.289 qpair failed and we were unable to recover it. 00:29:33.289 [2024-07-15 22:26:58.445143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.289 [2024-07-15 22:26:58.445151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.289 qpair failed and we were unable to recover it. 
00:29:33.289 [2024-07-15 22:26:58.445443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.289 [2024-07-15 22:26:58.445450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.289 qpair failed and we were unable to recover it. 00:29:33.289 [2024-07-15 22:26:58.445864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.289 [2024-07-15 22:26:58.445870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.289 qpair failed and we were unable to recover it. 00:29:33.289 [2024-07-15 22:26:58.446259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.289 [2024-07-15 22:26:58.446265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.289 qpair failed and we were unable to recover it. 00:29:33.289 [2024-07-15 22:26:58.446565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.289 [2024-07-15 22:26:58.446571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.289 qpair failed and we were unable to recover it. 00:29:33.289 [2024-07-15 22:26:58.447012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.289 [2024-07-15 22:26:58.447018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.289 qpair failed and we were unable to recover it. 00:29:33.289 [2024-07-15 22:26:58.447493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.289 [2024-07-15 22:26:58.447499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.289 qpair failed and we were unable to recover it. 00:29:33.289 [2024-07-15 22:26:58.447886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.289 [2024-07-15 22:26:58.447892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.289 qpair failed and we were unable to recover it. 00:29:33.289 [2024-07-15 22:26:58.448361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.289 [2024-07-15 22:26:58.448367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.289 qpair failed and we were unable to recover it. 00:29:33.289 [2024-07-15 22:26:58.448756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.289 [2024-07-15 22:26:58.448762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.289 qpair failed and we were unable to recover it. 00:29:33.289 [2024-07-15 22:26:58.449173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.290 [2024-07-15 22:26:58.449180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.290 qpair failed and we were unable to recover it. 
00:29:33.290 [2024-07-15 22:26:58.449459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.290 [2024-07-15 22:26:58.449467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.290 qpair failed and we were unable to recover it. 00:29:33.290 [2024-07-15 22:26:58.449778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.290 [2024-07-15 22:26:58.449785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.290 qpair failed and we were unable to recover it. 00:29:33.290 [2024-07-15 22:26:58.450126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.290 [2024-07-15 22:26:58.450132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.290 qpair failed and we were unable to recover it. 00:29:33.290 [2024-07-15 22:26:58.450528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.290 [2024-07-15 22:26:58.450534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.290 qpair failed and we were unable to recover it. 00:29:33.290 [2024-07-15 22:26:58.450925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.290 [2024-07-15 22:26:58.450931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.290 qpair failed and we were unable to recover it. 00:29:33.290 [2024-07-15 22:26:58.451323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.290 [2024-07-15 22:26:58.451330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.290 qpair failed and we were unable to recover it. 00:29:33.290 [2024-07-15 22:26:58.451752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.290 [2024-07-15 22:26:58.451758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.290 qpair failed and we were unable to recover it. 00:29:33.290 [2024-07-15 22:26:58.452176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.290 [2024-07-15 22:26:58.452182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.290 qpair failed and we were unable to recover it. 00:29:33.290 [2024-07-15 22:26:58.452608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.290 [2024-07-15 22:26:58.452614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.290 qpair failed and we were unable to recover it. 00:29:33.290 [2024-07-15 22:26:58.453057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.290 [2024-07-15 22:26:58.453063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.290 qpair failed and we were unable to recover it. 
00:29:33.290 [2024-07-15 22:26:58.453374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.290 [2024-07-15 22:26:58.453381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.290 qpair failed and we were unable to recover it. 00:29:33.290 [2024-07-15 22:26:58.453741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.290 [2024-07-15 22:26:58.453747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.290 qpair failed and we were unable to recover it. 00:29:33.290 [2024-07-15 22:26:58.454177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.290 [2024-07-15 22:26:58.454184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.290 qpair failed and we were unable to recover it. 00:29:33.290 [2024-07-15 22:26:58.454642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.290 [2024-07-15 22:26:58.454649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.290 qpair failed and we were unable to recover it. 00:29:33.290 [2024-07-15 22:26:58.455079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.290 [2024-07-15 22:26:58.455086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.290 qpair failed and we were unable to recover it. 00:29:33.290 [2024-07-15 22:26:58.455502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.290 [2024-07-15 22:26:58.455508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.290 qpair failed and we were unable to recover it. 00:29:33.290 [2024-07-15 22:26:58.455936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.290 [2024-07-15 22:26:58.455943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.290 qpair failed and we were unable to recover it. 00:29:33.290 [2024-07-15 22:26:58.456369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.290 [2024-07-15 22:26:58.456376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.290 qpair failed and we were unable to recover it. 00:29:33.290 [2024-07-15 22:26:58.456778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.290 [2024-07-15 22:26:58.456784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.290 qpair failed and we were unable to recover it. 00:29:33.290 [2024-07-15 22:26:58.457349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.290 [2024-07-15 22:26:58.457376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.290 qpair failed and we were unable to recover it. 
00:29:33.290 [2024-07-15 22:26:58.457865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.290 [2024-07-15 22:26:58.457873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.290 qpair failed and we were unable to recover it. 00:29:33.290 [2024-07-15 22:26:58.458331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.290 [2024-07-15 22:26:58.458358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.290 qpair failed and we were unable to recover it. 00:29:33.290 [2024-07-15 22:26:58.458825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.290 [2024-07-15 22:26:58.458833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.290 qpair failed and we were unable to recover it. 00:29:33.290 [2024-07-15 22:26:58.459100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.290 [2024-07-15 22:26:58.459108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.290 qpair failed and we were unable to recover it. 00:29:33.290 [2024-07-15 22:26:58.459530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.290 [2024-07-15 22:26:58.459539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.290 qpair failed and we were unable to recover it. 00:29:33.290 [2024-07-15 22:26:58.459869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.290 [2024-07-15 22:26:58.459875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.290 qpair failed and we were unable to recover it. 00:29:33.290 [2024-07-15 22:26:58.460372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.290 [2024-07-15 22:26:58.460400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.290 qpair failed and we were unable to recover it. 00:29:33.290 [2024-07-15 22:26:58.460826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.290 [2024-07-15 22:26:58.460835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.290 qpair failed and we were unable to recover it. 00:29:33.290 [2024-07-15 22:26:58.461349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.290 [2024-07-15 22:26:58.461376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.290 qpair failed and we were unable to recover it. 00:29:33.290 [2024-07-15 22:26:58.461812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.290 [2024-07-15 22:26:58.461821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.290 qpair failed and we were unable to recover it. 
00:29:33.290 [2024-07-15 22:26:58.462254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.290 [2024-07-15 22:26:58.462261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.290 qpair failed and we were unable to recover it. 00:29:33.290 [2024-07-15 22:26:58.462679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.290 [2024-07-15 22:26:58.462686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.290 qpair failed and we were unable to recover it. 00:29:33.290 [2024-07-15 22:26:58.463080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.290 [2024-07-15 22:26:58.463086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.290 qpair failed and we were unable to recover it. 00:29:33.290 [2024-07-15 22:26:58.463479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.290 [2024-07-15 22:26:58.463486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.290 qpair failed and we were unable to recover it. 00:29:33.290 [2024-07-15 22:26:58.463879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.290 [2024-07-15 22:26:58.463885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.290 qpair failed and we were unable to recover it. 00:29:33.290 [2024-07-15 22:26:58.464389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.290 [2024-07-15 22:26:58.464417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.290 qpair failed and we were unable to recover it. 00:29:33.290 [2024-07-15 22:26:58.464765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.290 [2024-07-15 22:26:58.464774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.290 qpair failed and we were unable to recover it. 00:29:33.290 [2024-07-15 22:26:58.465042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.290 [2024-07-15 22:26:58.465049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.290 qpair failed and we were unable to recover it. 00:29:33.290 [2024-07-15 22:26:58.465443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.290 [2024-07-15 22:26:58.465451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.290 qpair failed and we were unable to recover it. 00:29:33.290 [2024-07-15 22:26:58.465863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.290 [2024-07-15 22:26:58.465870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.291 qpair failed and we were unable to recover it. 
00:29:33.291 [2024-07-15 22:26:58.466285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.291 [2024-07-15 22:26:58.466294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.291 qpair failed and we were unable to recover it. 00:29:33.291 [2024-07-15 22:26:58.466690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.291 [2024-07-15 22:26:58.466696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.291 qpair failed and we were unable to recover it. 00:29:33.291 [2024-07-15 22:26:58.467089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.291 [2024-07-15 22:26:58.467095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.291 qpair failed and we were unable to recover it. 00:29:33.291 [2024-07-15 22:26:58.467517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.291 [2024-07-15 22:26:58.467524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.291 qpair failed and we were unable to recover it. 00:29:33.291 [2024-07-15 22:26:58.467726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.291 [2024-07-15 22:26:58.467735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.291 qpair failed and we were unable to recover it. 00:29:33.291 [2024-07-15 22:26:58.468126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.291 [2024-07-15 22:26:58.468134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.291 qpair failed and we were unable to recover it. 00:29:33.291 [2024-07-15 22:26:58.468513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.291 [2024-07-15 22:26:58.468519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.291 qpair failed and we were unable to recover it. 00:29:33.291 [2024-07-15 22:26:58.468903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.291 [2024-07-15 22:26:58.468909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.291 qpair failed and we were unable to recover it. 00:29:33.291 [2024-07-15 22:26:58.469405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.291 [2024-07-15 22:26:58.469433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.291 qpair failed and we were unable to recover it. 00:29:33.291 [2024-07-15 22:26:58.469827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.291 [2024-07-15 22:26:58.469836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.291 qpair failed and we were unable to recover it. 
00:29:33.296 [2024-07-15 22:26:58.550457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.296 [2024-07-15 22:26:58.550487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.296 qpair failed and we were unable to recover it. 00:29:33.296 [2024-07-15 22:26:58.550803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.296 [2024-07-15 22:26:58.550813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.296 qpair failed and we were unable to recover it. 00:29:33.296 [2024-07-15 22:26:58.551294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.296 [2024-07-15 22:26:58.551322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.296 qpair failed and we were unable to recover it. 00:29:33.296 [2024-07-15 22:26:58.551722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.296 [2024-07-15 22:26:58.551730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.296 qpair failed and we were unable to recover it. 00:29:33.296 [2024-07-15 22:26:58.552115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.296 [2024-07-15 22:26:58.552126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.296 qpair failed and we were unable to recover it. 00:29:33.296 [2024-07-15 22:26:58.552559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.296 [2024-07-15 22:26:58.552565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.296 qpair failed and we were unable to recover it. 00:29:33.296 [2024-07-15 22:26:58.552973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.296 [2024-07-15 22:26:58.552979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.296 qpair failed and we were unable to recover it. 00:29:33.296 [2024-07-15 22:26:58.553527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.296 [2024-07-15 22:26:58.553555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.296 qpair failed and we were unable to recover it. 00:29:33.296 [2024-07-15 22:26:58.553964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.296 [2024-07-15 22:26:58.553972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.296 qpair failed and we were unable to recover it. 00:29:33.296 [2024-07-15 22:26:58.554500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.296 [2024-07-15 22:26:58.554527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.296 qpair failed and we were unable to recover it. 
00:29:33.296 [2024-07-15 22:26:58.554930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.296 [2024-07-15 22:26:58.554938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.296 qpair failed and we were unable to recover it. 00:29:33.296 [2024-07-15 22:26:58.555459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.296 [2024-07-15 22:26:58.555487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.296 qpair failed and we were unable to recover it. 00:29:33.296 [2024-07-15 22:26:58.555888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.296 [2024-07-15 22:26:58.555896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.296 qpair failed and we were unable to recover it. 00:29:33.296 [2024-07-15 22:26:58.556394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.296 [2024-07-15 22:26:58.556422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.296 qpair failed and we were unable to recover it. 00:29:33.296 [2024-07-15 22:26:58.556633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.296 [2024-07-15 22:26:58.556643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.296 qpair failed and we were unable to recover it. 00:29:33.296 [2024-07-15 22:26:58.557019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.296 [2024-07-15 22:26:58.557027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.296 qpair failed and we were unable to recover it. 00:29:33.296 [2024-07-15 22:26:58.557463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.296 [2024-07-15 22:26:58.557470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.296 qpair failed and we were unable to recover it. 00:29:33.296 [2024-07-15 22:26:58.557863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.296 [2024-07-15 22:26:58.557871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.296 qpair failed and we were unable to recover it. 00:29:33.296 [2024-07-15 22:26:58.558254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.296 [2024-07-15 22:26:58.558261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.296 qpair failed and we were unable to recover it. 00:29:33.296 [2024-07-15 22:26:58.558657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.296 [2024-07-15 22:26:58.558663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.296 qpair failed and we were unable to recover it. 
00:29:33.296 [2024-07-15 22:26:58.559068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.296 [2024-07-15 22:26:58.559074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.296 qpair failed and we were unable to recover it. 00:29:33.296 [2024-07-15 22:26:58.559467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.296 [2024-07-15 22:26:58.559474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.296 qpair failed and we were unable to recover it. 00:29:33.296 [2024-07-15 22:26:58.559855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.296 [2024-07-15 22:26:58.559861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.296 qpair failed and we were unable to recover it. 00:29:33.296 [2024-07-15 22:26:58.560267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.296 [2024-07-15 22:26:58.560274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.296 qpair failed and we were unable to recover it. 00:29:33.296 [2024-07-15 22:26:58.560685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.296 [2024-07-15 22:26:58.560691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.296 qpair failed and we were unable to recover it. 00:29:33.296 [2024-07-15 22:26:58.560993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.296 [2024-07-15 22:26:58.561000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.296 qpair failed and we were unable to recover it. 00:29:33.296 [2024-07-15 22:26:58.561436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.296 [2024-07-15 22:26:58.561442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.296 qpair failed and we were unable to recover it. 00:29:33.296 [2024-07-15 22:26:58.561831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.296 [2024-07-15 22:26:58.561838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.296 qpair failed and we were unable to recover it. 00:29:33.296 [2024-07-15 22:26:58.562325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.296 [2024-07-15 22:26:58.562352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.296 qpair failed and we were unable to recover it. 00:29:33.296 [2024-07-15 22:26:58.562755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.296 [2024-07-15 22:26:58.562763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.296 qpair failed and we were unable to recover it. 
00:29:33.296 [2024-07-15 22:26:58.563068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.296 [2024-07-15 22:26:58.563075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.296 qpair failed and we were unable to recover it. 00:29:33.296 [2024-07-15 22:26:58.563242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.296 [2024-07-15 22:26:58.563252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.296 qpair failed and we were unable to recover it. 00:29:33.296 [2024-07-15 22:26:58.563677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.296 [2024-07-15 22:26:58.563684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.296 qpair failed and we were unable to recover it. 00:29:33.296 [2024-07-15 22:26:58.563973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.296 [2024-07-15 22:26:58.563981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.296 qpair failed and we were unable to recover it. 00:29:33.296 [2024-07-15 22:26:58.564421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.296 [2024-07-15 22:26:58.564427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.296 qpair failed and we were unable to recover it. 00:29:33.296 [2024-07-15 22:26:58.564840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.296 [2024-07-15 22:26:58.564847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.296 qpair failed and we were unable to recover it. 00:29:33.296 [2024-07-15 22:26:58.565344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.296 [2024-07-15 22:26:58.565371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.296 qpair failed and we were unable to recover it. 00:29:33.297 [2024-07-15 22:26:58.565580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.297 [2024-07-15 22:26:58.565589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.297 qpair failed and we were unable to recover it. 00:29:33.297 [2024-07-15 22:26:58.565972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.297 [2024-07-15 22:26:58.565979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.297 qpair failed and we were unable to recover it. 00:29:33.297 [2024-07-15 22:26:58.566387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.297 [2024-07-15 22:26:58.566394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.297 qpair failed and we were unable to recover it. 
00:29:33.297 [2024-07-15 22:26:58.566827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.297 [2024-07-15 22:26:58.566837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.297 qpair failed and we were unable to recover it. 00:29:33.297 [2024-07-15 22:26:58.567229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.297 [2024-07-15 22:26:58.567236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.297 qpair failed and we were unable to recover it. 00:29:33.297 [2024-07-15 22:26:58.567602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.297 [2024-07-15 22:26:58.567609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.297 qpair failed and we were unable to recover it. 00:29:33.297 [2024-07-15 22:26:58.568022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.297 [2024-07-15 22:26:58.568029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.297 qpair failed and we were unable to recover it. 00:29:33.297 [2024-07-15 22:26:58.568444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.297 [2024-07-15 22:26:58.568450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.297 qpair failed and we were unable to recover it. 00:29:33.297 [2024-07-15 22:26:58.568848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.297 [2024-07-15 22:26:58.568854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.297 qpair failed and we were unable to recover it. 00:29:33.297 [2024-07-15 22:26:58.569244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.297 [2024-07-15 22:26:58.569252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.297 qpair failed and we were unable to recover it. 00:29:33.297 [2024-07-15 22:26:58.569650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.297 [2024-07-15 22:26:58.569657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.297 qpair failed and we were unable to recover it. 00:29:33.297 [2024-07-15 22:26:58.570061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.297 [2024-07-15 22:26:58.570068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.297 qpair failed and we were unable to recover it. 00:29:33.297 [2024-07-15 22:26:58.570378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.297 [2024-07-15 22:26:58.570385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.297 qpair failed and we were unable to recover it. 
00:29:33.297 [2024-07-15 22:26:58.570777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.297 [2024-07-15 22:26:58.570783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.297 qpair failed and we were unable to recover it. 00:29:33.297 [2024-07-15 22:26:58.571073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.297 [2024-07-15 22:26:58.571080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.297 qpair failed and we were unable to recover it. 00:29:33.297 [2024-07-15 22:26:58.571486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.297 [2024-07-15 22:26:58.571493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.297 qpair failed and we were unable to recover it. 00:29:33.297 [2024-07-15 22:26:58.571882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.297 [2024-07-15 22:26:58.571888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.297 qpair failed and we were unable to recover it. 00:29:33.297 [2024-07-15 22:26:58.572294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.297 [2024-07-15 22:26:58.572301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.297 qpair failed and we were unable to recover it. 00:29:33.297 [2024-07-15 22:26:58.572710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.297 [2024-07-15 22:26:58.572716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.297 qpair failed and we were unable to recover it. 00:29:33.297 [2024-07-15 22:26:58.573142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.297 [2024-07-15 22:26:58.573149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.297 qpair failed and we were unable to recover it. 00:29:33.297 [2024-07-15 22:26:58.573550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.297 [2024-07-15 22:26:58.573557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.297 qpair failed and we were unable to recover it. 00:29:33.297 [2024-07-15 22:26:58.573946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.297 [2024-07-15 22:26:58.573952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.297 qpair failed and we were unable to recover it. 00:29:33.297 [2024-07-15 22:26:58.574353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.297 [2024-07-15 22:26:58.574360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.297 qpair failed and we were unable to recover it. 
00:29:33.297 [2024-07-15 22:26:58.574774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.297 [2024-07-15 22:26:58.574780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.297 qpair failed and we were unable to recover it. 00:29:33.297 [2024-07-15 22:26:58.575196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.297 [2024-07-15 22:26:58.575202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.297 qpair failed and we were unable to recover it. 00:29:33.297 [2024-07-15 22:26:58.575469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.297 [2024-07-15 22:26:58.575476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.297 qpair failed and we were unable to recover it. 00:29:33.297 [2024-07-15 22:26:58.575684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.297 [2024-07-15 22:26:58.575693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.297 qpair failed and we were unable to recover it. 00:29:33.297 [2024-07-15 22:26:58.576023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.297 [2024-07-15 22:26:58.576029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.297 qpair failed and we were unable to recover it. 00:29:33.297 [2024-07-15 22:26:58.576431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.297 [2024-07-15 22:26:58.576439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.297 qpair failed and we were unable to recover it. 00:29:33.297 [2024-07-15 22:26:58.576753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.297 [2024-07-15 22:26:58.576760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.297 qpair failed and we were unable to recover it. 00:29:33.297 [2024-07-15 22:26:58.577157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.297 [2024-07-15 22:26:58.577164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.297 qpair failed and we were unable to recover it. 00:29:33.297 [2024-07-15 22:26:58.577587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.297 [2024-07-15 22:26:58.577594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.297 qpair failed and we were unable to recover it. 00:29:33.297 [2024-07-15 22:26:58.577982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.297 [2024-07-15 22:26:58.577988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.297 qpair failed and we were unable to recover it. 
00:29:33.297 [2024-07-15 22:26:58.578299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.297 [2024-07-15 22:26:58.578307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.297 qpair failed and we were unable to recover it. 00:29:33.297 [2024-07-15 22:26:58.578717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.297 [2024-07-15 22:26:58.578724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.297 qpair failed and we were unable to recover it. 00:29:33.297 [2024-07-15 22:26:58.579187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.297 [2024-07-15 22:26:58.579193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.297 qpair failed and we were unable to recover it. 00:29:33.297 [2024-07-15 22:26:58.579591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.297 [2024-07-15 22:26:58.579597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.297 qpair failed and we were unable to recover it. 00:29:33.297 [2024-07-15 22:26:58.580008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.297 [2024-07-15 22:26:58.580015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.297 qpair failed and we were unable to recover it. 00:29:33.297 [2024-07-15 22:26:58.580445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.297 [2024-07-15 22:26:58.580451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.297 qpair failed and we were unable to recover it. 00:29:33.297 [2024-07-15 22:26:58.580878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.297 [2024-07-15 22:26:58.580884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.297 qpair failed and we were unable to recover it. 00:29:33.297 [2024-07-15 22:26:58.581272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.298 [2024-07-15 22:26:58.581279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.298 qpair failed and we were unable to recover it. 00:29:33.298 [2024-07-15 22:26:58.581692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.298 [2024-07-15 22:26:58.581699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.298 qpair failed and we were unable to recover it. 00:29:33.298 [2024-07-15 22:26:58.582088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.298 [2024-07-15 22:26:58.582094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.298 qpair failed and we were unable to recover it. 
00:29:33.298 [2024-07-15 22:26:58.582550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.298 [2024-07-15 22:26:58.582559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.298 qpair failed and we were unable to recover it. 00:29:33.298 [2024-07-15 22:26:58.582991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.298 [2024-07-15 22:26:58.582998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.298 qpair failed and we were unable to recover it. 00:29:33.298 [2024-07-15 22:26:58.583395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.298 [2024-07-15 22:26:58.583423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.298 qpair failed and we were unable to recover it. 00:29:33.298 [2024-07-15 22:26:58.583829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.298 [2024-07-15 22:26:58.583837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.298 qpair failed and we were unable to recover it. 00:29:33.298 [2024-07-15 22:26:58.584343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.298 [2024-07-15 22:26:58.584370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.298 qpair failed and we were unable to recover it. 00:29:33.298 [2024-07-15 22:26:58.584769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.298 [2024-07-15 22:26:58.584778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.298 qpair failed and we were unable to recover it. 00:29:33.298 [2024-07-15 22:26:58.585097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.298 [2024-07-15 22:26:58.585104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.298 qpair failed and we were unable to recover it. 00:29:33.298 [2024-07-15 22:26:58.585304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.298 [2024-07-15 22:26:58.585314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.298 qpair failed and we were unable to recover it. 00:29:33.298 [2024-07-15 22:26:58.585685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.298 [2024-07-15 22:26:58.585693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.298 qpair failed and we were unable to recover it. 00:29:33.298 [2024-07-15 22:26:58.586125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.298 [2024-07-15 22:26:58.586132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.298 qpair failed and we were unable to recover it. 
00:29:33.298 [2024-07-15 22:26:58.586526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.298 [2024-07-15 22:26:58.586532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.298 qpair failed and we were unable to recover it. 00:29:33.298 [2024-07-15 22:26:58.586918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.298 [2024-07-15 22:26:58.586924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.298 qpair failed and we were unable to recover it. 00:29:33.298 [2024-07-15 22:26:58.587358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.298 [2024-07-15 22:26:58.587385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.298 qpair failed and we were unable to recover it. 00:29:33.298 [2024-07-15 22:26:58.587591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.298 [2024-07-15 22:26:58.587601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.298 qpair failed and we were unable to recover it. 00:29:33.298 [2024-07-15 22:26:58.587989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.298 [2024-07-15 22:26:58.587996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.298 qpair failed and we were unable to recover it. 00:29:33.298 [2024-07-15 22:26:58.588414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.298 [2024-07-15 22:26:58.588421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.298 qpair failed and we were unable to recover it. 00:29:33.298 [2024-07-15 22:26:58.588830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.298 [2024-07-15 22:26:58.588838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.298 qpair failed and we were unable to recover it. 00:29:33.298 [2024-07-15 22:26:58.589414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.298 [2024-07-15 22:26:58.589442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.298 qpair failed and we were unable to recover it. 00:29:33.298 [2024-07-15 22:26:58.589852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.298 [2024-07-15 22:26:58.589861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.298 qpair failed and we were unable to recover it. 00:29:33.298 [2024-07-15 22:26:58.590171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.298 [2024-07-15 22:26:58.590178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.298 qpair failed and we were unable to recover it. 
00:29:33.298 [2024-07-15 22:26:58.590595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.298 [2024-07-15 22:26:58.590601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.298 qpair failed and we were unable to recover it. 00:29:33.298 [2024-07-15 22:26:58.590821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.298 [2024-07-15 22:26:58.590830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.298 qpair failed and we were unable to recover it. 00:29:33.298 [2024-07-15 22:26:58.591272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.298 [2024-07-15 22:26:58.591285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.298 qpair failed and we were unable to recover it. 00:29:33.298 [2024-07-15 22:26:58.591698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.298 [2024-07-15 22:26:58.591704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.298 qpair failed and we were unable to recover it. 00:29:33.298 [2024-07-15 22:26:58.592112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.298 [2024-07-15 22:26:58.592119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.298 qpair failed and we were unable to recover it. 00:29:33.298 [2024-07-15 22:26:58.592524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.298 [2024-07-15 22:26:58.592530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.298 qpair failed and we were unable to recover it. 00:29:33.298 [2024-07-15 22:26:58.592938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.298 [2024-07-15 22:26:58.592945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.298 qpair failed and we were unable to recover it. 00:29:33.298 [2024-07-15 22:26:58.593483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.298 [2024-07-15 22:26:58.593511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.298 qpair failed and we were unable to recover it. 00:29:33.298 [2024-07-15 22:26:58.593956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.298 [2024-07-15 22:26:58.593964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.298 qpair failed and we were unable to recover it. 00:29:33.569 [2024-07-15 22:26:58.594510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.569 [2024-07-15 22:26:58.594538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.569 qpair failed and we were unable to recover it. 
00:29:33.569 [2024-07-15 22:26:58.594941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.569 [2024-07-15 22:26:58.594949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.569 qpair failed and we were unable to recover it. 00:29:33.569 [2024-07-15 22:26:58.595029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.570 [2024-07-15 22:26:58.595038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.570 qpair failed and we were unable to recover it. 00:29:33.570 [2024-07-15 22:26:58.595340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.570 [2024-07-15 22:26:58.595347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.570 qpair failed and we were unable to recover it. 00:29:33.570 [2024-07-15 22:26:58.595658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.570 [2024-07-15 22:26:58.595665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.570 qpair failed and we were unable to recover it. 00:29:33.570 [2024-07-15 22:26:58.595956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.570 [2024-07-15 22:26:58.595964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.570 qpair failed and we were unable to recover it. 00:29:33.570 [2024-07-15 22:26:58.596360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.570 [2024-07-15 22:26:58.596367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.570 qpair failed and we were unable to recover it. 00:29:33.570 [2024-07-15 22:26:58.596796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.570 [2024-07-15 22:26:58.596802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.570 qpair failed and we were unable to recover it. 00:29:33.570 [2024-07-15 22:26:58.597211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.570 [2024-07-15 22:26:58.597217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.570 qpair failed and we were unable to recover it. 00:29:33.570 [2024-07-15 22:26:58.597603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.570 [2024-07-15 22:26:58.597610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.570 qpair failed and we were unable to recover it. 00:29:33.570 [2024-07-15 22:26:58.597900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.570 [2024-07-15 22:26:58.597907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.570 qpair failed and we were unable to recover it. 
00:29:33.570 [2024-07-15 22:26:58.598299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.570 [2024-07-15 22:26:58.598309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.570 qpair failed and we were unable to recover it. 00:29:33.570 [2024-07-15 22:26:58.598700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.570 [2024-07-15 22:26:58.598706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.570 qpair failed and we were unable to recover it. 00:29:33.570 [2024-07-15 22:26:58.599135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.570 [2024-07-15 22:26:58.599142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.570 qpair failed and we were unable to recover it. 00:29:33.570 [2024-07-15 22:26:58.599558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.570 [2024-07-15 22:26:58.599564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.570 qpair failed and we were unable to recover it. 00:29:33.570 [2024-07-15 22:26:58.599982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.570 [2024-07-15 22:26:58.599988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.570 qpair failed and we were unable to recover it. 00:29:33.570 [2024-07-15 22:26:58.600369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.570 [2024-07-15 22:26:58.600375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.570 qpair failed and we were unable to recover it. 00:29:33.570 [2024-07-15 22:26:58.600689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.570 [2024-07-15 22:26:58.600696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.570 qpair failed and we were unable to recover it. 00:29:33.570 [2024-07-15 22:26:58.600895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.570 [2024-07-15 22:26:58.600904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.570 qpair failed and we were unable to recover it. 00:29:33.570 [2024-07-15 22:26:58.601206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.570 [2024-07-15 22:26:58.601213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.570 qpair failed and we were unable to recover it. 00:29:33.570 [2024-07-15 22:26:58.601482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.570 [2024-07-15 22:26:58.601489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.570 qpair failed and we were unable to recover it. 
00:29:33.570 [2024-07-15 22:26:58.601901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.570 [2024-07-15 22:26:58.601909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.570 qpair failed and we were unable to recover it. 00:29:33.570 [2024-07-15 22:26:58.602378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.570 [2024-07-15 22:26:58.602385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.570 qpair failed and we were unable to recover it. 00:29:33.570 [2024-07-15 22:26:58.602791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.570 [2024-07-15 22:26:58.602797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.570 qpair failed and we were unable to recover it. 00:29:33.570 [2024-07-15 22:26:58.603184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.570 [2024-07-15 22:26:58.603190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.570 qpair failed and we were unable to recover it. 00:29:33.570 [2024-07-15 22:26:58.603583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.570 [2024-07-15 22:26:58.603590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.570 qpair failed and we were unable to recover it. 00:29:33.570 [2024-07-15 22:26:58.603977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.570 [2024-07-15 22:26:58.603984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.570 qpair failed and we were unable to recover it. 00:29:33.570 [2024-07-15 22:26:58.604390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.570 [2024-07-15 22:26:58.604397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.570 qpair failed and we were unable to recover it. 00:29:33.570 [2024-07-15 22:26:58.604812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.570 [2024-07-15 22:26:58.604819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.570 qpair failed and we were unable to recover it. 00:29:33.570 [2024-07-15 22:26:58.605248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.570 [2024-07-15 22:26:58.605255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.570 qpair failed and we were unable to recover it. 00:29:33.570 [2024-07-15 22:26:58.605642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.570 [2024-07-15 22:26:58.605648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.570 qpair failed and we were unable to recover it. 
00:29:33.570 [2024-07-15 22:26:58.606033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.570 [2024-07-15 22:26:58.606039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:33.570 qpair failed and we were unable to recover it.
[... the same three-line error repeats for every reconnect attempt timestamped between 22:26:58.606033 and 22:26:58.695436 (console timestamps 00:29:33.570 through 00:29:33.576): posix_sock_create reports connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420, and each qpair fails and cannot be recovered ...]
00:29:33.576 [2024-07-15 22:26:58.695408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.576 [2024-07-15 22:26:58.695436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:33.576 qpair failed and we were unable to recover it.
00:29:33.576 [2024-07-15 22:26:58.695892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.576 [2024-07-15 22:26:58.695900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.576 qpair failed and we were unable to recover it. 00:29:33.576 [2024-07-15 22:26:58.696120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.576 [2024-07-15 22:26:58.696133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.576 qpair failed and we were unable to recover it. 00:29:33.576 [2024-07-15 22:26:58.696539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.576 [2024-07-15 22:26:58.696545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.576 qpair failed and we were unable to recover it. 00:29:33.576 [2024-07-15 22:26:58.696943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.576 [2024-07-15 22:26:58.696949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.576 qpair failed and we were unable to recover it. 00:29:33.576 [2024-07-15 22:26:58.697125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.576 [2024-07-15 22:26:58.697132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.576 qpair failed and we were unable to recover it. 00:29:33.576 [2024-07-15 22:26:58.697634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.576 [2024-07-15 22:26:58.697662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.576 qpair failed and we were unable to recover it. 00:29:33.576 [2024-07-15 22:26:58.697970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.576 [2024-07-15 22:26:58.697980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.576 qpair failed and we were unable to recover it. 00:29:33.576 [2024-07-15 22:26:58.698365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.576 [2024-07-15 22:26:58.698393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.576 qpair failed and we were unable to recover it. 00:29:33.576 [2024-07-15 22:26:58.698828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.576 [2024-07-15 22:26:58.698836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.576 qpair failed and we were unable to recover it. 00:29:33.576 [2024-07-15 22:26:58.699366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.576 [2024-07-15 22:26:58.699394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.576 qpair failed and we were unable to recover it. 
00:29:33.576 [2024-07-15 22:26:58.699811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.576 [2024-07-15 22:26:58.699819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.576 qpair failed and we were unable to recover it. 00:29:33.576 [2024-07-15 22:26:58.700121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.576 [2024-07-15 22:26:58.700134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.576 qpair failed and we were unable to recover it. 00:29:33.576 [2024-07-15 22:26:58.700329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.576 [2024-07-15 22:26:58.700339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.576 qpair failed and we were unable to recover it. 00:29:33.576 [2024-07-15 22:26:58.700722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.576 [2024-07-15 22:26:58.700729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.576 qpair failed and we were unable to recover it. 00:29:33.576 [2024-07-15 22:26:58.701327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.576 [2024-07-15 22:26:58.701354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.576 qpair failed and we were unable to recover it. 00:29:33.576 [2024-07-15 22:26:58.701793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.576 [2024-07-15 22:26:58.701801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.576 qpair failed and we were unable to recover it. 00:29:33.576 [2024-07-15 22:26:58.702189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.576 [2024-07-15 22:26:58.702196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.576 qpair failed and we were unable to recover it. 00:29:33.576 [2024-07-15 22:26:58.702463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.576 [2024-07-15 22:26:58.702470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.576 qpair failed and we were unable to recover it. 00:29:33.576 [2024-07-15 22:26:58.702759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.576 [2024-07-15 22:26:58.702766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.576 qpair failed and we were unable to recover it. 00:29:33.576 [2024-07-15 22:26:58.703201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.576 [2024-07-15 22:26:58.703208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.576 qpair failed and we were unable to recover it. 
00:29:33.576 [2024-07-15 22:26:58.703645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.576 [2024-07-15 22:26:58.703652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.576 qpair failed and we were unable to recover it. 00:29:33.576 [2024-07-15 22:26:58.703951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.576 [2024-07-15 22:26:58.703957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.576 qpair failed and we were unable to recover it. 00:29:33.576 [2024-07-15 22:26:58.704321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.576 [2024-07-15 22:26:58.704328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.576 qpair failed and we were unable to recover it. 00:29:33.576 [2024-07-15 22:26:58.704739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.576 [2024-07-15 22:26:58.704745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.576 qpair failed and we were unable to recover it. 00:29:33.576 [2024-07-15 22:26:58.705145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.576 [2024-07-15 22:26:58.705153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.576 qpair failed and we were unable to recover it. 00:29:33.576 [2024-07-15 22:26:58.705650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.576 [2024-07-15 22:26:58.705657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.576 qpair failed and we were unable to recover it. 00:29:33.576 [2024-07-15 22:26:58.706064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.576 [2024-07-15 22:26:58.706070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.576 qpair failed and we were unable to recover it. 00:29:33.576 [2024-07-15 22:26:58.706462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.576 [2024-07-15 22:26:58.706468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.576 qpair failed and we were unable to recover it. 00:29:33.576 [2024-07-15 22:26:58.706864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.576 [2024-07-15 22:26:58.706871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.576 qpair failed and we were unable to recover it. 00:29:33.576 [2024-07-15 22:26:58.707302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.576 [2024-07-15 22:26:58.707309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.576 qpair failed and we were unable to recover it. 
00:29:33.576 [2024-07-15 22:26:58.707702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.576 [2024-07-15 22:26:58.707708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.576 qpair failed and we were unable to recover it. 00:29:33.576 [2024-07-15 22:26:58.708112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.576 [2024-07-15 22:26:58.708118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.576 qpair failed and we were unable to recover it. 00:29:33.576 [2024-07-15 22:26:58.708526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.576 [2024-07-15 22:26:58.708533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.576 qpair failed and we were unable to recover it. 00:29:33.576 [2024-07-15 22:26:58.708952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.576 [2024-07-15 22:26:58.708958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.576 qpair failed and we were unable to recover it. 00:29:33.576 [2024-07-15 22:26:58.709460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.576 [2024-07-15 22:26:58.709487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.576 qpair failed and we were unable to recover it. 00:29:33.576 [2024-07-15 22:26:58.709895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.576 [2024-07-15 22:26:58.709903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.576 qpair failed and we were unable to recover it. 00:29:33.577 [2024-07-15 22:26:58.710353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.577 [2024-07-15 22:26:58.710381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.577 qpair failed and we were unable to recover it. 00:29:33.577 [2024-07-15 22:26:58.710825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.577 [2024-07-15 22:26:58.710836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.577 qpair failed and we were unable to recover it. 00:29:33.577 [2024-07-15 22:26:58.711336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.577 [2024-07-15 22:26:58.711363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.577 qpair failed and we were unable to recover it. 00:29:33.577 [2024-07-15 22:26:58.711826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.577 [2024-07-15 22:26:58.711834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.577 qpair failed and we were unable to recover it. 
00:29:33.577 [2024-07-15 22:26:58.712307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.577 [2024-07-15 22:26:58.712335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.577 qpair failed and we were unable to recover it. 00:29:33.577 [2024-07-15 22:26:58.712816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.577 [2024-07-15 22:26:58.712824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.577 qpair failed and we were unable to recover it. 00:29:33.577 [2024-07-15 22:26:58.713251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.577 [2024-07-15 22:26:58.713259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.577 qpair failed and we were unable to recover it. 00:29:33.577 [2024-07-15 22:26:58.713673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.577 [2024-07-15 22:26:58.713680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.577 qpair failed and we were unable to recover it. 00:29:33.577 [2024-07-15 22:26:58.714110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.577 [2024-07-15 22:26:58.714116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.577 qpair failed and we were unable to recover it. 00:29:33.577 [2024-07-15 22:26:58.714387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.577 [2024-07-15 22:26:58.714395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.577 qpair failed and we were unable to recover it. 00:29:33.577 [2024-07-15 22:26:58.714877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.577 [2024-07-15 22:26:58.714883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.577 qpair failed and we were unable to recover it. 00:29:33.577 [2024-07-15 22:26:58.715381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.577 [2024-07-15 22:26:58.715409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.577 qpair failed and we were unable to recover it. 00:29:33.577 [2024-07-15 22:26:58.715841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.577 [2024-07-15 22:26:58.715849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.577 qpair failed and we were unable to recover it. 00:29:33.577 [2024-07-15 22:26:58.716403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.577 [2024-07-15 22:26:58.716430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.577 qpair failed and we were unable to recover it. 
00:29:33.577 [2024-07-15 22:26:58.716835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.577 [2024-07-15 22:26:58.716844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.577 qpair failed and we were unable to recover it. 00:29:33.577 [2024-07-15 22:26:58.717114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.577 [2024-07-15 22:26:58.717121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.577 qpair failed and we were unable to recover it. 00:29:33.577 [2024-07-15 22:26:58.717499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.577 [2024-07-15 22:26:58.717506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.577 qpair failed and we were unable to recover it. 00:29:33.577 [2024-07-15 22:26:58.717972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.577 [2024-07-15 22:26:58.717978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.577 qpair failed and we were unable to recover it. 00:29:33.577 [2024-07-15 22:26:58.718489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.577 [2024-07-15 22:26:58.718516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.577 qpair failed and we were unable to recover it. 00:29:33.577 [2024-07-15 22:26:58.719007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.577 [2024-07-15 22:26:58.719015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.577 qpair failed and we were unable to recover it. 00:29:33.577 [2024-07-15 22:26:58.719403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.577 [2024-07-15 22:26:58.719410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.577 qpair failed and we were unable to recover it. 00:29:33.577 [2024-07-15 22:26:58.719809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.577 [2024-07-15 22:26:58.719816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.577 qpair failed and we were unable to recover it. 00:29:33.577 [2024-07-15 22:26:58.720327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.577 [2024-07-15 22:26:58.720354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.577 qpair failed and we were unable to recover it. 00:29:33.577 [2024-07-15 22:26:58.720760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.577 [2024-07-15 22:26:58.720769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.577 qpair failed and we were unable to recover it. 
00:29:33.577 [2024-07-15 22:26:58.721155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.577 [2024-07-15 22:26:58.721163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.577 qpair failed and we were unable to recover it. 00:29:33.577 [2024-07-15 22:26:58.721586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.577 [2024-07-15 22:26:58.721592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.577 qpair failed and we were unable to recover it. 00:29:33.577 [2024-07-15 22:26:58.721977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.577 [2024-07-15 22:26:58.721984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.577 qpair failed and we were unable to recover it. 00:29:33.577 [2024-07-15 22:26:58.722385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.577 [2024-07-15 22:26:58.722391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.577 qpair failed and we were unable to recover it. 00:29:33.577 [2024-07-15 22:26:58.722781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.577 [2024-07-15 22:26:58.722788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.577 qpair failed and we were unable to recover it. 00:29:33.577 [2024-07-15 22:26:58.723176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.577 [2024-07-15 22:26:58.723183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.577 qpair failed and we were unable to recover it. 00:29:33.577 [2024-07-15 22:26:58.723377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.577 [2024-07-15 22:26:58.723387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.577 qpair failed and we were unable to recover it. 00:29:33.577 [2024-07-15 22:26:58.723905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.577 [2024-07-15 22:26:58.723911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.577 qpair failed and we were unable to recover it. 00:29:33.577 [2024-07-15 22:26:58.724304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.577 [2024-07-15 22:26:58.724310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.577 qpair failed and we were unable to recover it. 00:29:33.577 [2024-07-15 22:26:58.724720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.577 [2024-07-15 22:26:58.724726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.577 qpair failed and we were unable to recover it. 
00:29:33.577 [2024-07-15 22:26:58.724925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.577 [2024-07-15 22:26:58.724933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.577 qpair failed and we were unable to recover it. 00:29:33.577 [2024-07-15 22:26:58.725243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.577 [2024-07-15 22:26:58.725251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.577 qpair failed and we were unable to recover it. 00:29:33.577 [2024-07-15 22:26:58.725652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.577 [2024-07-15 22:26:58.725659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.577 qpair failed and we were unable to recover it. 00:29:33.577 [2024-07-15 22:26:58.726091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.577 [2024-07-15 22:26:58.726098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.577 qpair failed and we were unable to recover it. 00:29:33.577 [2024-07-15 22:26:58.726504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.577 [2024-07-15 22:26:58.726511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.577 qpair failed and we were unable to recover it. 00:29:33.577 [2024-07-15 22:26:58.726921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.577 [2024-07-15 22:26:58.726928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.577 qpair failed and we were unable to recover it. 00:29:33.578 [2024-07-15 22:26:58.727400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.578 [2024-07-15 22:26:58.727407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.578 qpair failed and we were unable to recover it. 00:29:33.578 [2024-07-15 22:26:58.727800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.578 [2024-07-15 22:26:58.727810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.578 qpair failed and we were unable to recover it. 00:29:33.578 [2024-07-15 22:26:58.728327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.578 [2024-07-15 22:26:58.728354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.578 qpair failed and we were unable to recover it. 00:29:33.578 [2024-07-15 22:26:58.728759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.578 [2024-07-15 22:26:58.728767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.578 qpair failed and we were unable to recover it. 
00:29:33.578 [2024-07-15 22:26:58.729156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.578 [2024-07-15 22:26:58.729163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.578 qpair failed and we were unable to recover it. 00:29:33.578 [2024-07-15 22:26:58.729555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.578 [2024-07-15 22:26:58.729562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.578 qpair failed and we were unable to recover it. 00:29:33.578 [2024-07-15 22:26:58.729949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.578 [2024-07-15 22:26:58.729956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.578 qpair failed and we were unable to recover it. 00:29:33.578 [2024-07-15 22:26:58.730369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.578 [2024-07-15 22:26:58.730376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.578 qpair failed and we were unable to recover it. 00:29:33.578 [2024-07-15 22:26:58.730808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.578 [2024-07-15 22:26:58.730815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.578 qpair failed and we were unable to recover it. 00:29:33.578 [2024-07-15 22:26:58.731208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.578 [2024-07-15 22:26:58.731214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.578 qpair failed and we were unable to recover it. 00:29:33.578 [2024-07-15 22:26:58.731612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.578 [2024-07-15 22:26:58.731618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.578 qpair failed and we were unable to recover it. 00:29:33.578 [2024-07-15 22:26:58.732002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.578 [2024-07-15 22:26:58.732009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.578 qpair failed and we were unable to recover it. 00:29:33.578 [2024-07-15 22:26:58.732394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.578 [2024-07-15 22:26:58.732401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.578 qpair failed and we were unable to recover it. 00:29:33.578 [2024-07-15 22:26:58.732786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.578 [2024-07-15 22:26:58.732793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.578 qpair failed and we were unable to recover it. 
00:29:33.578 [2024-07-15 22:26:58.733173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.578 [2024-07-15 22:26:58.733180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.578 qpair failed and we were unable to recover it. 00:29:33.578 [2024-07-15 22:26:58.733578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.578 [2024-07-15 22:26:58.733585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.578 qpair failed and we were unable to recover it. 00:29:33.578 [2024-07-15 22:26:58.733973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.578 [2024-07-15 22:26:58.733980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.578 qpair failed and we were unable to recover it. 00:29:33.578 [2024-07-15 22:26:58.734162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.578 [2024-07-15 22:26:58.734173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.578 qpair failed and we were unable to recover it. 00:29:33.578 [2024-07-15 22:26:58.734581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.578 [2024-07-15 22:26:58.734588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.578 qpair failed and we were unable to recover it. 00:29:33.578 [2024-07-15 22:26:58.734992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.578 [2024-07-15 22:26:58.734999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.578 qpair failed and we were unable to recover it. 00:29:33.578 [2024-07-15 22:26:58.735422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.578 [2024-07-15 22:26:58.735429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.578 qpair failed and we were unable to recover it. 00:29:33.578 [2024-07-15 22:26:58.735855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.578 [2024-07-15 22:26:58.735861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.578 qpair failed and we were unable to recover it. 00:29:33.578 [2024-07-15 22:26:58.736365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.578 [2024-07-15 22:26:58.736392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.578 qpair failed and we were unable to recover it. 00:29:33.578 [2024-07-15 22:26:58.736805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.578 [2024-07-15 22:26:58.736813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.578 qpair failed and we were unable to recover it. 
00:29:33.578 [2024-07-15 22:26:58.737203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.578 [2024-07-15 22:26:58.737210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.578 qpair failed and we were unable to recover it. 00:29:33.578 [2024-07-15 22:26:58.737601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.578 [2024-07-15 22:26:58.737607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.578 qpair failed and we were unable to recover it. 00:29:33.578 [2024-07-15 22:26:58.738003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.578 [2024-07-15 22:26:58.738010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.578 qpair failed and we were unable to recover it. 00:29:33.578 [2024-07-15 22:26:58.738405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.578 [2024-07-15 22:26:58.738412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.578 qpair failed and we were unable to recover it. 00:29:33.578 [2024-07-15 22:26:58.738809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.578 [2024-07-15 22:26:58.738817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.578 qpair failed and we were unable to recover it. 00:29:33.578 [2024-07-15 22:26:58.739263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.578 [2024-07-15 22:26:58.739270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.578 qpair failed and we were unable to recover it. 00:29:33.578 [2024-07-15 22:26:58.739580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.578 [2024-07-15 22:26:58.739587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.578 qpair failed and we were unable to recover it. 00:29:33.578 [2024-07-15 22:26:58.739771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.578 [2024-07-15 22:26:58.739780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.578 qpair failed and we were unable to recover it. 00:29:33.578 [2024-07-15 22:26:58.740068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.578 [2024-07-15 22:26:58.740075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.578 qpair failed and we were unable to recover it. 00:29:33.578 [2024-07-15 22:26:58.740468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.578 [2024-07-15 22:26:58.740475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.578 qpair failed and we were unable to recover it. 
00:29:33.578 [2024-07-15 22:26:58.740863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.578 [2024-07-15 22:26:58.740869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.578 qpair failed and we were unable to recover it. 00:29:33.579 [2024-07-15 22:26:58.741175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.579 [2024-07-15 22:26:58.741183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.579 qpair failed and we were unable to recover it. 00:29:33.579 [2024-07-15 22:26:58.741621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.579 [2024-07-15 22:26:58.741627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.579 qpair failed and we were unable to recover it. 00:29:33.579 [2024-07-15 22:26:58.741930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.579 [2024-07-15 22:26:58.741937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.579 qpair failed and we were unable to recover it. 00:29:33.579 [2024-07-15 22:26:58.742353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.579 [2024-07-15 22:26:58.742360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.579 qpair failed and we were unable to recover it. 00:29:33.579 [2024-07-15 22:26:58.742748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.579 [2024-07-15 22:26:58.742754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.579 qpair failed and we were unable to recover it. 00:29:33.579 [2024-07-15 22:26:58.743181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.579 [2024-07-15 22:26:58.743188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.579 qpair failed and we were unable to recover it. 00:29:33.579 [2024-07-15 22:26:58.743616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.579 [2024-07-15 22:26:58.743624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.579 qpair failed and we were unable to recover it. 00:29:33.579 [2024-07-15 22:26:58.744008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.579 [2024-07-15 22:26:58.744014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.579 qpair failed and we were unable to recover it. 00:29:33.579 [2024-07-15 22:26:58.744441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.579 [2024-07-15 22:26:58.744448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.579 qpair failed and we were unable to recover it. 
00:29:33.579 [2024-07-15 22:26:58.744835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.579 [2024-07-15 22:26:58.744842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.579 qpair failed and we were unable to recover it. 00:29:33.579 [2024-07-15 22:26:58.745305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.579 [2024-07-15 22:26:58.745313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.579 qpair failed and we were unable to recover it. 00:29:33.579 [2024-07-15 22:26:58.745714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.579 [2024-07-15 22:26:58.745721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.579 qpair failed and we were unable to recover it. 00:29:33.579 [2024-07-15 22:26:58.746124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.579 [2024-07-15 22:26:58.746131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.579 qpair failed and we were unable to recover it. 00:29:33.579 [2024-07-15 22:26:58.746517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.579 [2024-07-15 22:26:58.746523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.579 qpair failed and we were unable to recover it. 00:29:33.579 [2024-07-15 22:26:58.746933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.579 [2024-07-15 22:26:58.746940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.579 qpair failed and we were unable to recover it. 00:29:33.579 [2024-07-15 22:26:58.747325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.579 [2024-07-15 22:26:58.747353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.579 qpair failed and we were unable to recover it. 00:29:33.579 [2024-07-15 22:26:58.747761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.579 [2024-07-15 22:26:58.747769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.579 qpair failed and we were unable to recover it. 00:29:33.579 [2024-07-15 22:26:58.747973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.579 [2024-07-15 22:26:58.747982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.579 qpair failed and we were unable to recover it. 00:29:33.579 [2024-07-15 22:26:58.748404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.579 [2024-07-15 22:26:58.748411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.579 qpair failed and we were unable to recover it. 
00:29:33.579 [2024-07-15 22:26:58.748800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.579 [2024-07-15 22:26:58.748807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.579 qpair failed and we were unable to recover it. 00:29:33.579 [2024-07-15 22:26:58.749196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.579 [2024-07-15 22:26:58.749203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.579 qpair failed and we were unable to recover it. 00:29:33.579 [2024-07-15 22:26:58.749589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.579 [2024-07-15 22:26:58.749596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.579 qpair failed and we were unable to recover it. 00:29:33.579 [2024-07-15 22:26:58.750048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.579 [2024-07-15 22:26:58.750055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.579 qpair failed and we were unable to recover it. 00:29:33.579 [2024-07-15 22:26:58.750496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.579 [2024-07-15 22:26:58.750503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.579 qpair failed and we were unable to recover it. 00:29:33.579 [2024-07-15 22:26:58.750911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.579 [2024-07-15 22:26:58.750919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.579 qpair failed and we were unable to recover it. 00:29:33.579 [2024-07-15 22:26:58.751355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.579 [2024-07-15 22:26:58.751382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.579 qpair failed and we were unable to recover it. 00:29:33.579 [2024-07-15 22:26:58.751813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.579 [2024-07-15 22:26:58.751822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.579 qpair failed and we were unable to recover it. 00:29:33.579 [2024-07-15 22:26:58.752241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.579 [2024-07-15 22:26:58.752248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.579 qpair failed and we were unable to recover it. 00:29:33.579 [2024-07-15 22:26:58.752642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.579 [2024-07-15 22:26:58.752649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.579 qpair failed and we were unable to recover it. 
00:29:33.584 [2024-07-15 22:26:58.832506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.584 [2024-07-15 22:26:58.832513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.584 qpair failed and we were unable to recover it. 00:29:33.584 [2024-07-15 22:26:58.832901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.584 [2024-07-15 22:26:58.832907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.584 qpair failed and we were unable to recover it. 00:29:33.584 [2024-07-15 22:26:58.833333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.584 [2024-07-15 22:26:58.833340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.584 qpair failed and we were unable to recover it. 00:29:33.584 [2024-07-15 22:26:58.833531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.584 [2024-07-15 22:26:58.833539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.584 qpair failed and we were unable to recover it. 00:29:33.584 [2024-07-15 22:26:58.833944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.584 [2024-07-15 22:26:58.833951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.584 qpair failed and we were unable to recover it. 00:29:33.584 [2024-07-15 22:26:58.834346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.584 [2024-07-15 22:26:58.834353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.584 qpair failed and we were unable to recover it. 00:29:33.584 [2024-07-15 22:26:58.834787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.584 [2024-07-15 22:26:58.834794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.584 qpair failed and we were unable to recover it. 00:29:33.584 [2024-07-15 22:26:58.835183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.584 [2024-07-15 22:26:58.835190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.585 qpair failed and we were unable to recover it. 00:29:33.585 [2024-07-15 22:26:58.835599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.585 [2024-07-15 22:26:58.835607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.585 qpair failed and we were unable to recover it. 00:29:33.585 [2024-07-15 22:26:58.836016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.585 [2024-07-15 22:26:58.836023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.585 qpair failed and we were unable to recover it. 
00:29:33.585 [2024-07-15 22:26:58.836495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.585 [2024-07-15 22:26:58.836503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.585 qpair failed and we were unable to recover it. 00:29:33.585 [2024-07-15 22:26:58.836897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.585 [2024-07-15 22:26:58.836903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.585 qpair failed and we were unable to recover it. 00:29:33.585 [2024-07-15 22:26:58.837285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.585 [2024-07-15 22:26:58.837292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.585 qpair failed and we were unable to recover it. 00:29:33.585 [2024-07-15 22:26:58.837714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.585 [2024-07-15 22:26:58.837720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.585 qpair failed and we were unable to recover it. 00:29:33.585 [2024-07-15 22:26:58.838108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.585 [2024-07-15 22:26:58.838115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.585 qpair failed and we were unable to recover it. 00:29:33.585 [2024-07-15 22:26:58.838541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.585 [2024-07-15 22:26:58.838548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.585 qpair failed and we were unable to recover it. 00:29:33.585 [2024-07-15 22:26:58.838973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.585 [2024-07-15 22:26:58.838980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.585 qpair failed and we were unable to recover it. 00:29:33.585 [2024-07-15 22:26:58.839404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.585 [2024-07-15 22:26:58.839434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.585 qpair failed and we were unable to recover it. 00:29:33.585 [2024-07-15 22:26:58.839848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.585 [2024-07-15 22:26:58.839857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.585 qpair failed and we were unable to recover it. 00:29:33.585 [2024-07-15 22:26:58.840364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.585 [2024-07-15 22:26:58.840392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.585 qpair failed and we were unable to recover it. 
00:29:33.585 [2024-07-15 22:26:58.840840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.585 [2024-07-15 22:26:58.840848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.585 qpair failed and we were unable to recover it. 00:29:33.585 [2024-07-15 22:26:58.841353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.585 [2024-07-15 22:26:58.841380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.585 qpair failed and we were unable to recover it. 00:29:33.585 [2024-07-15 22:26:58.841862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.585 [2024-07-15 22:26:58.841870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.585 qpair failed and we were unable to recover it. 00:29:33.585 [2024-07-15 22:26:58.842388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.585 [2024-07-15 22:26:58.842416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.585 qpair failed and we were unable to recover it. 00:29:33.585 [2024-07-15 22:26:58.842815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.585 [2024-07-15 22:26:58.842823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.585 qpair failed and we were unable to recover it. 00:29:33.585 [2024-07-15 22:26:58.843240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.585 [2024-07-15 22:26:58.843247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.585 qpair failed and we were unable to recover it. 00:29:33.585 [2024-07-15 22:26:58.843635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.585 [2024-07-15 22:26:58.843641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.585 qpair failed and we were unable to recover it. 00:29:33.585 [2024-07-15 22:26:58.844029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.585 [2024-07-15 22:26:58.844035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.585 qpair failed and we were unable to recover it. 00:29:33.585 [2024-07-15 22:26:58.844385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.585 [2024-07-15 22:26:58.844392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.585 qpair failed and we were unable to recover it. 00:29:33.585 [2024-07-15 22:26:58.844662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.585 [2024-07-15 22:26:58.844669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.585 qpair failed and we were unable to recover it. 
00:29:33.585 [2024-07-15 22:26:58.845095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.585 [2024-07-15 22:26:58.845102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.585 qpair failed and we were unable to recover it. 00:29:33.585 [2024-07-15 22:26:58.845488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.585 [2024-07-15 22:26:58.845495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.585 qpair failed and we were unable to recover it. 00:29:33.585 [2024-07-15 22:26:58.845921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.585 [2024-07-15 22:26:58.845928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.585 qpair failed and we were unable to recover it. 00:29:33.585 [2024-07-15 22:26:58.846334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.585 [2024-07-15 22:26:58.846342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.585 qpair failed and we were unable to recover it. 00:29:33.585 [2024-07-15 22:26:58.846765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.585 [2024-07-15 22:26:58.846772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.585 qpair failed and we were unable to recover it. 00:29:33.585 [2024-07-15 22:26:58.847197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.585 [2024-07-15 22:26:58.847204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.585 qpair failed and we were unable to recover it. 00:29:33.585 [2024-07-15 22:26:58.847591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.585 [2024-07-15 22:26:58.847598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.585 qpair failed and we were unable to recover it. 00:29:33.585 [2024-07-15 22:26:58.847983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.585 [2024-07-15 22:26:58.847989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.585 qpair failed and we were unable to recover it. 00:29:33.585 [2024-07-15 22:26:58.848187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.585 [2024-07-15 22:26:58.848197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.585 qpair failed and we were unable to recover it. 00:29:33.585 [2024-07-15 22:26:58.848596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.585 [2024-07-15 22:26:58.848603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.585 qpair failed and we were unable to recover it. 
00:29:33.585 [2024-07-15 22:26:58.848990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.585 [2024-07-15 22:26:58.848996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.585 qpair failed and we were unable to recover it. 00:29:33.585 [2024-07-15 22:26:58.849381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.585 [2024-07-15 22:26:58.849388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.585 qpair failed and we were unable to recover it. 00:29:33.585 [2024-07-15 22:26:58.849773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.585 [2024-07-15 22:26:58.849779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.585 qpair failed and we were unable to recover it. 00:29:33.585 [2024-07-15 22:26:58.850178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.585 [2024-07-15 22:26:58.850184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.585 qpair failed and we were unable to recover it. 00:29:33.585 [2024-07-15 22:26:58.850609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.585 [2024-07-15 22:26:58.850615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.585 qpair failed and we were unable to recover it. 00:29:33.585 [2024-07-15 22:26:58.851019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.585 [2024-07-15 22:26:58.851025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.585 qpair failed and we were unable to recover it. 00:29:33.585 [2024-07-15 22:26:58.851431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.585 [2024-07-15 22:26:58.851438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.585 qpair failed and we were unable to recover it. 00:29:33.585 [2024-07-15 22:26:58.851849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.585 [2024-07-15 22:26:58.851855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.585 qpair failed and we were unable to recover it. 00:29:33.585 [2024-07-15 22:26:58.852250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.586 [2024-07-15 22:26:58.852262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.586 qpair failed and we were unable to recover it. 00:29:33.586 [2024-07-15 22:26:58.852665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.586 [2024-07-15 22:26:58.852671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.586 qpair failed and we were unable to recover it. 
00:29:33.586 [2024-07-15 22:26:58.853064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.586 [2024-07-15 22:26:58.853071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.586 qpair failed and we were unable to recover it. 00:29:33.586 [2024-07-15 22:26:58.853471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.586 [2024-07-15 22:26:58.853477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.586 qpair failed and we were unable to recover it. 00:29:33.586 [2024-07-15 22:26:58.853904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.586 [2024-07-15 22:26:58.853911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.586 qpair failed and we were unable to recover it. 00:29:33.586 [2024-07-15 22:26:58.854185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.586 [2024-07-15 22:26:58.854192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.586 qpair failed and we were unable to recover it. 00:29:33.586 [2024-07-15 22:26:58.854597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.586 [2024-07-15 22:26:58.854604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.586 qpair failed and we were unable to recover it. 00:29:33.586 [2024-07-15 22:26:58.854998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.586 [2024-07-15 22:26:58.855004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.586 qpair failed and we were unable to recover it. 00:29:33.586 [2024-07-15 22:26:58.855476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.586 [2024-07-15 22:26:58.855483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.586 qpair failed and we were unable to recover it. 00:29:33.586 [2024-07-15 22:26:58.855880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.586 [2024-07-15 22:26:58.855887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.586 qpair failed and we were unable to recover it. 00:29:33.586 [2024-07-15 22:26:58.856392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.586 [2024-07-15 22:26:58.856419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.586 qpair failed and we were unable to recover it. 00:29:33.586 [2024-07-15 22:26:58.856853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.586 [2024-07-15 22:26:58.856862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.586 qpair failed and we were unable to recover it. 
00:29:33.586 [2024-07-15 22:26:58.857358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.586 [2024-07-15 22:26:58.857385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.586 qpair failed and we were unable to recover it. 00:29:33.586 [2024-07-15 22:26:58.857803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.586 [2024-07-15 22:26:58.857811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.586 qpair failed and we were unable to recover it. 00:29:33.586 [2024-07-15 22:26:58.858323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.586 [2024-07-15 22:26:58.858351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.586 qpair failed and we were unable to recover it. 00:29:33.586 [2024-07-15 22:26:58.858777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.586 [2024-07-15 22:26:58.858785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.586 qpair failed and we were unable to recover it. 00:29:33.586 [2024-07-15 22:26:58.859180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.586 [2024-07-15 22:26:58.859187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.586 qpair failed and we were unable to recover it. 00:29:33.586 [2024-07-15 22:26:58.859491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.586 [2024-07-15 22:26:58.859498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.586 qpair failed and we were unable to recover it. 00:29:33.586 [2024-07-15 22:26:58.859892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.586 [2024-07-15 22:26:58.859899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.586 qpair failed and we were unable to recover it. 00:29:33.586 [2024-07-15 22:26:58.860325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.586 [2024-07-15 22:26:58.860332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.586 qpair failed and we were unable to recover it. 00:29:33.586 [2024-07-15 22:26:58.860716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.586 [2024-07-15 22:26:58.860722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.586 qpair failed and we were unable to recover it. 00:29:33.586 [2024-07-15 22:26:58.861166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.586 [2024-07-15 22:26:58.861173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.586 qpair failed and we were unable to recover it. 
00:29:33.586 [2024-07-15 22:26:58.861583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.586 [2024-07-15 22:26:58.861589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.586 qpair failed and we were unable to recover it. 00:29:33.586 [2024-07-15 22:26:58.862024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.586 [2024-07-15 22:26:58.862031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.586 qpair failed and we were unable to recover it. 00:29:33.586 [2024-07-15 22:26:58.862336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.586 [2024-07-15 22:26:58.862344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.586 qpair failed and we were unable to recover it. 00:29:33.586 [2024-07-15 22:26:58.862647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.586 [2024-07-15 22:26:58.862653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.586 qpair failed and we were unable to recover it. 00:29:33.586 [2024-07-15 22:26:58.863049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.586 [2024-07-15 22:26:58.863055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.586 qpair failed and we were unable to recover it. 00:29:33.586 [2024-07-15 22:26:58.863443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.586 [2024-07-15 22:26:58.863451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.586 qpair failed and we were unable to recover it. 00:29:33.586 [2024-07-15 22:26:58.863855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.586 [2024-07-15 22:26:58.863861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.586 qpair failed and we were unable to recover it. 00:29:33.586 [2024-07-15 22:26:58.864247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.586 [2024-07-15 22:26:58.864254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.586 qpair failed and we were unable to recover it. 00:29:33.586 [2024-07-15 22:26:58.864452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.586 [2024-07-15 22:26:58.864462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.586 qpair failed and we were unable to recover it. 00:29:33.586 [2024-07-15 22:26:58.864770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.586 [2024-07-15 22:26:58.864777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.586 qpair failed and we were unable to recover it. 
00:29:33.586 [2024-07-15 22:26:58.865076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.586 [2024-07-15 22:26:58.865084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.586 qpair failed and we were unable to recover it. 00:29:33.586 [2024-07-15 22:26:58.865519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.586 [2024-07-15 22:26:58.865526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.586 qpair failed and we were unable to recover it. 00:29:33.586 [2024-07-15 22:26:58.865922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.586 [2024-07-15 22:26:58.865928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.586 qpair failed and we were unable to recover it. 00:29:33.586 [2024-07-15 22:26:58.866357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.586 [2024-07-15 22:26:58.866364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.586 qpair failed and we were unable to recover it. 00:29:33.586 [2024-07-15 22:26:58.866671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.586 [2024-07-15 22:26:58.866682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.586 qpair failed and we were unable to recover it. 00:29:33.586 [2024-07-15 22:26:58.866972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.586 [2024-07-15 22:26:58.866979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.586 qpair failed and we were unable to recover it. 00:29:33.586 [2024-07-15 22:26:58.867399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.586 [2024-07-15 22:26:58.867405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.586 qpair failed and we were unable to recover it. 00:29:33.586 [2024-07-15 22:26:58.867669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.586 [2024-07-15 22:26:58.867677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.586 qpair failed and we were unable to recover it. 00:29:33.586 [2024-07-15 22:26:58.868085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.586 [2024-07-15 22:26:58.868092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.586 qpair failed and we were unable to recover it. 00:29:33.587 [2024-07-15 22:26:58.868476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.587 [2024-07-15 22:26:58.868482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.587 qpair failed and we were unable to recover it. 
00:29:33.587 [2024-07-15 22:26:58.868869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.587 [2024-07-15 22:26:58.868875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.587 qpair failed and we were unable to recover it. 00:29:33.587 [2024-07-15 22:26:58.869240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.587 [2024-07-15 22:26:58.869247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.587 qpair failed and we were unable to recover it. 00:29:33.587 [2024-07-15 22:26:58.869422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.587 [2024-07-15 22:26:58.869430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.587 qpair failed and we were unable to recover it. 00:29:33.587 [2024-07-15 22:26:58.869920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.587 [2024-07-15 22:26:58.869927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.587 qpair failed and we were unable to recover it. 00:29:33.587 [2024-07-15 22:26:58.870311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.587 [2024-07-15 22:26:58.870317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.587 qpair failed and we were unable to recover it. 00:29:33.587 [2024-07-15 22:26:58.870725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.587 [2024-07-15 22:26:58.870731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.587 qpair failed and we were unable to recover it. 00:29:33.587 [2024-07-15 22:26:58.871115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.587 [2024-07-15 22:26:58.871132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.587 qpair failed and we were unable to recover it. 00:29:33.587 [2024-07-15 22:26:58.871544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.587 [2024-07-15 22:26:58.871551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.587 qpair failed and we were unable to recover it. 00:29:33.587 [2024-07-15 22:26:58.871762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.587 [2024-07-15 22:26:58.871770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.587 qpair failed and we were unable to recover it. 00:29:33.587 [2024-07-15 22:26:58.872161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.587 [2024-07-15 22:26:58.872170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.587 qpair failed and we were unable to recover it. 
00:29:33.587 [2024-07-15 22:26:58.872571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.587 [2024-07-15 22:26:58.872577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.587 qpair failed and we were unable to recover it. 00:29:33.587 [2024-07-15 22:26:58.872964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.587 [2024-07-15 22:26:58.872971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.587 qpair failed and we were unable to recover it. 00:29:33.587 [2024-07-15 22:26:58.873371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.587 [2024-07-15 22:26:58.873378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.587 qpair failed and we were unable to recover it. 00:29:33.587 [2024-07-15 22:26:58.873772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.587 [2024-07-15 22:26:58.873778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.587 qpair failed and we were unable to recover it. 00:29:33.587 [2024-07-15 22:26:58.874194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.587 [2024-07-15 22:26:58.874201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.587 qpair failed and we were unable to recover it. 00:29:33.587 [2024-07-15 22:26:58.874609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.587 [2024-07-15 22:26:58.874615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.587 qpair failed and we were unable to recover it. 00:29:33.587 [2024-07-15 22:26:58.875056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.587 [2024-07-15 22:26:58.875064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.587 qpair failed and we were unable to recover it. 00:29:33.587 [2024-07-15 22:26:58.875461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.587 [2024-07-15 22:26:58.875468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.587 qpair failed and we were unable to recover it. 00:29:33.587 [2024-07-15 22:26:58.875927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.587 [2024-07-15 22:26:58.875933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.587 qpair failed and we were unable to recover it. 00:29:33.587 [2024-07-15 22:26:58.876331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.587 [2024-07-15 22:26:58.876338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.587 qpair failed and we were unable to recover it. 
00:29:33.587 [2024-07-15 22:26:58.876727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.587 [2024-07-15 22:26:58.876734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.587 qpair failed and we were unable to recover it. 00:29:33.587 [2024-07-15 22:26:58.877161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.587 [2024-07-15 22:26:58.877168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.587 qpair failed and we were unable to recover it. 00:29:33.587 [2024-07-15 22:26:58.877562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.587 [2024-07-15 22:26:58.877568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.587 qpair failed and we were unable to recover it. 00:29:33.587 [2024-07-15 22:26:58.877958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.587 [2024-07-15 22:26:58.877965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.587 qpair failed and we were unable to recover it. 00:29:33.587 [2024-07-15 22:26:58.878369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.587 [2024-07-15 22:26:58.878376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.587 qpair failed and we were unable to recover it. 00:29:33.587 [2024-07-15 22:26:58.878811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.587 [2024-07-15 22:26:58.878818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.587 qpair failed and we were unable to recover it. 00:29:33.587 [2024-07-15 22:26:58.879313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.587 [2024-07-15 22:26:58.879340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.587 qpair failed and we were unable to recover it. 00:29:33.587 [2024-07-15 22:26:58.879805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.587 [2024-07-15 22:26:58.879813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.587 qpair failed and we were unable to recover it. 00:29:33.587 [2024-07-15 22:26:58.880207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.587 [2024-07-15 22:26:58.880215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.587 qpair failed and we were unable to recover it. 00:29:33.587 [2024-07-15 22:26:58.880661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.587 [2024-07-15 22:26:58.880667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.587 qpair failed and we were unable to recover it. 
00:29:33.587 [2024-07-15 22:26:58.881045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.587 [2024-07-15 22:26:58.881051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.587 qpair failed and we were unable to recover it. 00:29:33.587 [2024-07-15 22:26:58.881470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.587 [2024-07-15 22:26:58.881477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.587 qpair failed and we were unable to recover it. 00:29:33.587 [2024-07-15 22:26:58.881764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.587 [2024-07-15 22:26:58.881770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.587 qpair failed and we were unable to recover it. 00:29:33.588 [2024-07-15 22:26:58.882166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.588 [2024-07-15 22:26:58.882172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.588 qpair failed and we were unable to recover it. 00:29:33.588 [2024-07-15 22:26:58.882596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.588 [2024-07-15 22:26:58.882607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.588 qpair failed and we were unable to recover it. 00:29:33.588 [2024-07-15 22:26:58.882900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.588 [2024-07-15 22:26:58.882906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.588 qpair failed and we were unable to recover it. 00:29:33.588 [2024-07-15 22:26:58.883328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.588 [2024-07-15 22:26:58.883334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.588 qpair failed and we were unable to recover it. 00:29:33.588 [2024-07-15 22:26:58.883749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.588 [2024-07-15 22:26:58.883755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.588 qpair failed and we were unable to recover it. 00:29:33.588 [2024-07-15 22:26:58.883960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.588 [2024-07-15 22:26:58.883969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.588 qpair failed and we were unable to recover it. 00:29:33.859 [2024-07-15 22:26:58.884418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.859 [2024-07-15 22:26:58.884426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.859 qpair failed and we were unable to recover it. 
00:29:33.859 [2024-07-15 22:26:58.884821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.859 [2024-07-15 22:26:58.884827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:33.859 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." sequence repeats for every reconnect attempt from 2024-07-15 22:26:58.885220 through 22:26:58.915403 ...]
00:29:33.861 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2962113 Killed "${NVMF_APP[@]}" "$@"
00:29:33.861 [2024-07-15 22:26:58.915856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.861 [2024-07-15 22:26:58.915866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:33.861 qpair failed and we were unable to recover it.
00:29:33.861 22:26:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:29:33.861 [2024-07-15 22:26:58.916470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.861 [2024-07-15 22:26:58.916497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:33.861 qpair failed and we were unable to recover it.
00:29:33.861 22:26:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:29:33.861 [2024-07-15 22:26:58.916901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.861 [2024-07-15 22:26:58.916910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:33.861 qpair failed and we were unable to recover it.
00:29:33.861 22:26:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:29:33.861 22:26:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:29:33.861 [2024-07-15 22:26:58.917409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.861 [2024-07-15 22:26:58.917437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:33.861 qpair failed and we were unable to recover it.
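A note on the repeated failures above: on Linux, errno 111 is ECONNREFUSED, the error connect() returns when nothing is listening at the destination address and port. The "Killed" line reports that the nvmf target application ("${NVMF_APP[@]}") has just been killed, and the trace that follows shows disconnect_init 10.0.0.2 starting a fresh target via nvmfappstart, so until the new target is serving 10.0.0.2:4420 every reconnect attempt from the host side is refused. The minimal C sketch below is an illustration only, not SPDK code; the address and port are simply copied from the log lines.

    /* connect_probe.c: attempt one TCP connect() and report the resulting errno.
     * Illustration only; 10.0.0.2:4420 mirrors the addr/port in the log above. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            /* With a reachable host but no listener bound to the port, this
             * prints "connect() failed, errno = 111 (Connection refused)". */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        } else {
            printf("connected\n");
        }
        close(fd);
        return 0;
    }

If there is no route to 10.0.0.2 the call may instead time out or report a different errno; the refused case corresponds to the host being reachable with no bound listener, which matches the killed-target state in this log.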
00:29:33.861 22:26:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... the same connect() failed (errno = 111) / sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." sequence repeats for every reconnect attempt from 2024-07-15 22:26:58.917845 through 22:26:58.924178 ...]
00:29:33.862 22:26:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2963140
00:29:33.862 [2024-07-15 22:26:58.924587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.862 [2024-07-15 22:26:58.924598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:33.862 qpair failed and we were unable to recover it.
00:29:33.862 22:26:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2963140
00:29:33.862 [2024-07-15 22:26:58.924907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.862 [2024-07-15 22:26:58.924916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:33.862 qpair failed and we were unable to recover it.
00:29:33.862 22:26:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2963140 ']'
00:29:33.862 22:26:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:29:33.862 [2024-07-15 22:26:58.925332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.862 [2024-07-15 22:26:58.925341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:33.862 qpair failed and we were unable to recover it.
00:29:33.862 22:26:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:33.862 22:26:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:33.862 [2024-07-15 22:26:58.925748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.862 [2024-07-15 22:26:58.925756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:33.862 qpair failed and we were unable to recover it.
00:29:33.862 22:26:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:33.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:33.862 [2024-07-15 22:26:58.926163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.862 [2024-07-15 22:26:58.926172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:33.862 qpair failed and we were unable to recover it.
00:29:33.862 22:26:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:29:33.862 22:26:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:33.862 [2024-07-15 22:26:58.926495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.862 [2024-07-15 22:26:58.926504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:33.862 qpair failed and we were unable to recover it.
00:29:33.862 [2024-07-15 22:26:58.926943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.862 [2024-07-15 22:26:58.926951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:33.862 qpair failed and we were unable to recover it.
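The waitforlisten 2963140 step above is where the test blocks until the relaunched nvmf_tgt (started under ip netns exec cvl_0_0_ns_spdk in the trace) is reachable over its RPC socket. The log records only rpc_addr=/var/tmp/spdk.sock and max_retries=100, not the function body, so the C sketch below is a hedged approximation of that wait loop rather than the actual autotest_common.sh implementation: it simply retries a connect() to the UNIX domain socket until it succeeds or the retry budget is exhausted.

    /* wait_for_rpc_socket.c: hedged sketch of "wait until a process listens on a
     * UNIX domain socket". Not the SPDK implementation; the path and retry count
     * mirror the rpc_addr and max_retries values seen in the trace above. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    static int rpc_socket_ready(const char *path)
    {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0) return 0;

        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        /* connect() succeeds only once the server has bound and is listening. */
        int ok = connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0;
        close(fd);
        return ok;
    }

    int main(void)
    {
        const char *rpc_addr = "/var/tmp/spdk.sock"; /* value from the trace */
        int max_retries = 100;                       /* value from the trace */

        for (int i = 0; i < max_retries; i++) {
            if (rpc_socket_ready(rpc_addr)) {
                printf("listener ready on %s\n", rpc_addr);
                return 0;
            }
            sleep(1); /* give the target time to create and bind the socket */
        }
        fprintf(stderr, "gave up waiting for %s\n", rpc_addr);
        return 1;
    }

Probing with connect() is a sufficient readiness check here because the socket file only accepts connections once the target's RPC server is up; before that the call fails and the loop just sleeps and retries, which is the behavior the "Waiting for process to start up and listen on UNIX domain socket" message describes.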
00:29:33.863 [2024-07-15 22:26:58.927162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.863 [2024-07-15 22:26:58.927169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:33.863 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." sequence repeats for every reconnect attempt from 2024-07-15 22:26:58.927544 through 22:26:58.966718 ...]
00:29:33.866 [2024-07-15 22:26:58.967130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.866 [2024-07-15 22:26:58.967138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.866 qpair failed and we were unable to recover it. 00:29:33.866 [2024-07-15 22:26:58.967550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.866 [2024-07-15 22:26:58.967557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.866 qpair failed and we were unable to recover it. 00:29:33.866 [2024-07-15 22:26:58.967970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.866 [2024-07-15 22:26:58.967976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.866 qpair failed and we were unable to recover it. 00:29:33.867 [2024-07-15 22:26:58.968489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.867 [2024-07-15 22:26:58.968517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.867 qpair failed and we were unable to recover it. 00:29:33.867 [2024-07-15 22:26:58.968934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.867 [2024-07-15 22:26:58.968943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.867 qpair failed and we were unable to recover it. 00:29:33.867 [2024-07-15 22:26:58.969435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.867 [2024-07-15 22:26:58.969462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.867 qpair failed and we were unable to recover it. 00:29:33.867 [2024-07-15 22:26:58.969902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.867 [2024-07-15 22:26:58.969910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.867 qpair failed and we were unable to recover it. 00:29:33.867 [2024-07-15 22:26:58.970288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.867 [2024-07-15 22:26:58.970315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.867 qpair failed and we were unable to recover it. 00:29:33.867 [2024-07-15 22:26:58.970757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.867 [2024-07-15 22:26:58.970765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.867 qpair failed and we were unable to recover it. 00:29:33.867 [2024-07-15 22:26:58.971322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.867 [2024-07-15 22:26:58.971350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.867 qpair failed and we were unable to recover it. 
00:29:33.867 [2024-07-15 22:26:58.971819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.867 [2024-07-15 22:26:58.971828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.867 qpair failed and we were unable to recover it. 00:29:33.867 [2024-07-15 22:26:58.972108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.867 [2024-07-15 22:26:58.972116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.867 qpair failed and we were unable to recover it. 00:29:33.867 [2024-07-15 22:26:58.972455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.867 [2024-07-15 22:26:58.972462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.867 qpair failed and we were unable to recover it. 00:29:33.867 [2024-07-15 22:26:58.972859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.867 [2024-07-15 22:26:58.972867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.867 qpair failed and we were unable to recover it. 00:29:33.867 [2024-07-15 22:26:58.973323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.867 [2024-07-15 22:26:58.973350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.867 qpair failed and we were unable to recover it. 00:29:33.867 [2024-07-15 22:26:58.973788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.867 [2024-07-15 22:26:58.973796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.867 qpair failed and we were unable to recover it. 00:29:33.867 [2024-07-15 22:26:58.974197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.867 [2024-07-15 22:26:58.974204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.867 qpair failed and we were unable to recover it. 00:29:33.867 [2024-07-15 22:26:58.974624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.867 [2024-07-15 22:26:58.974630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.867 qpair failed and we were unable to recover it. 00:29:33.867 [2024-07-15 22:26:58.975013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.867 [2024-07-15 22:26:58.975020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.867 qpair failed and we were unable to recover it. 00:29:33.867 [2024-07-15 22:26:58.975445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.867 [2024-07-15 22:26:58.975452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.867 qpair failed and we were unable to recover it. 
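errno = 111 in the failures above is ECONNREFUSED on Linux: the initiator reaches 10.0.0.2, but nothing is accepting on TCP port 4420 at that moment, so the kernel rejects each attempt and the host marks the qpair as unrecoverable. A minimal sketch, using plain POSIX sockets rather than SPDK code (the file name and hard-coded values are illustrative only), of how such a refused connect surfaces:

    /* connect_probe.c - illustrative sketch: observe ECONNREFUSED (errno 111 on
     * Linux) when connecting to an address with no listener on the port. Not
     * SPDK code; the address and port simply mirror the values in the log. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            /* With the host reachable but no listener bound to the port,
             * this prints errno 111 (ECONNREFUSED). */
            fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                    errno, strerror(errno));
        }

        close(fd);
        return 0;
    }

Built with, for example, cc connect_probe.c, this prints "connect() failed, errno = 111 (Connection refused)" whenever the address is reachable but no NVMe/TCP target is listening on that port yet.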
00:29:33.867 [... the connect() failed, errno = 111 / sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." sequence repeats at 22:26:58.975869, 22:26:58.976279 and 22:26:58.976761 ...]
00:29:33.867 [2024-07-15 22:26:58.977168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.867 [2024-07-15 22:26:58.977153] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization...
00:29:33.867 [2024-07-15 22:26:58.977175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:33.867 qpair failed and we were unable to recover it.
00:29:33.867 [2024-07-15 22:26:58.977197] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:33.867 [... the same connect()/qpair failure sequence continues at 22:26:58.977619, 22:26:58.978052, 22:26:58.978462, 22:26:58.978778 and 22:26:58.979192 ...]
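The bracketed "[ DPDK EAL parameters: ... ]" entry logged just above is the command line the nvmf application hands to DPDK's Environment Abstraction Layer: core mask 0xF0, per-library log levels, a fixed --base-virtaddr, and the spdk0 --file-prefix that keeps this process's hugepage files separate from other SPDK processes on the same node. As a rough, hypothetical illustration (not SPDK's actual startup code), a plain DPDK application passes such parameters to rte_eal_init() along these lines:

    /* eal_init_sketch.c - hypothetical illustration of passing EAL parameters
     * like the ones in the log to DPDK's rte_eal_init(). Not SPDK's code. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <rte_eal.h>

    int main(void)
    {
        /* Argument vector mirroring a subset of the logged EAL parameters. */
        char *eal_argv[] = {
            "nvmf",
            "-c", "0xF0",                      /* run EAL threads on cores 4-7 */
            "--no-telemetry",
            "--log-level=lib.eal:6",
            "--base-virtaddr=0x200000000000",
            "--file-prefix=spdk0",             /* isolate this process's hugepage files */
            "--proc-type=auto",
        };
        int eal_argc = (int)(sizeof(eal_argv) / sizeof(eal_argv[0]));

        if (rte_eal_init(eal_argc, eal_argv) < 0) {
            fprintf(stderr, "EAL initialization failed\n");
            return EXIT_FAILURE;
        }

        /* ... application work would go here ... */

        rte_eal_cleanup();
        return EXIT_SUCCESS;
    }

A sketch like this would be built against libdpdk (for example via pkg-config --cflags --libs libdpdk).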
00:29:33.867 [2024-07-15 22:26:58.979625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.867 [2024-07-15 22:26:58.979632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:33.867 qpair failed and we were unable to recover it.
00:29:33.868 [... the same three-line failure sequence repeats for every connection attempt logged between 22:26:58.980 and 22:26:59.004 ...]
00:29:33.869 [... the connect() failed, errno = 111 / sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." sequence repeats for the attempts logged between 22:26:59.005099 and 22:26:59.008176 ...]
00:29:33.869 EAL: No free 2048 kB hugepages reported on node 1
00:29:33.869 [2024-07-15 22:26:59.008478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.869 [2024-07-15 22:26:59.008486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:33.869 qpair failed and we were unable to recover it.
00:29:33.869 [2024-07-15 22:26:59.008706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.869 [2024-07-15 22:26:59.008714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:33.869 qpair failed and we were unable to recover it.
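The "EAL: No free 2048 kB hugepages reported on node 1" warning above means NUMA node 1 has no free 2 MB hugepages at the moment DPDK scans the system; the test environment normally reserves hugepages before the target starts, so this is the first place to look if hugepage allocation later fails. A small sketch, assuming the standard Linux sysfs layout for hugepage counters (the paths below are the usual kernel ones, not anything SPDK-specific, and node1 only exists on multi-node machines), that reads the free counts:

    /* hugepage_check.c - sketch: read free 2048 kB hugepage counts from sysfs. */
    #include <stdio.h>

    static long read_counter(const char *path)
    {
        long value = -1;
        FILE *f = fopen(path, "r");
        if (f != NULL) {
            if (fscanf(f, "%ld", &value) != 1)
                value = -1;
            fclose(f);
        }
        return value;
    }

    int main(void)
    {
        long global_free = read_counter(
            "/sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages");
        long node1_free = read_counter(
            "/sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages");

        printf("free 2048 kB hugepages (all nodes): %ld\n", global_free);
        printf("free 2048 kB hugepages (node 1):    %ld\n", node1_free);
        return 0;
    }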
00:29:33.869 [2024-07-15 22:26:59.009091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.869 [2024-07-15 22:26:59.009099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:33.869 qpair failed and we were unable to recover it.
00:29:33.870 [... the same three-line failure sequence repeats for every connection attempt logged between 22:26:59.009 and 22:26:59.038 ...]
00:29:33.872 [2024-07-15 22:26:59.038779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.872 [2024-07-15 22:26:59.038786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.872 qpair failed and we were unable to recover it. 00:29:33.872 [2024-07-15 22:26:59.039172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.872 [2024-07-15 22:26:59.039179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.872 qpair failed and we were unable to recover it. 00:29:33.872 [2024-07-15 22:26:59.039565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.872 [2024-07-15 22:26:59.039572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.872 qpair failed and we were unable to recover it. 00:29:33.872 [2024-07-15 22:26:59.039977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.872 [2024-07-15 22:26:59.039984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.872 qpair failed and we were unable to recover it. 00:29:33.872 [2024-07-15 22:26:59.040411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.872 [2024-07-15 22:26:59.040418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.872 qpair failed and we were unable to recover it. 00:29:33.872 [2024-07-15 22:26:59.040844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.872 [2024-07-15 22:26:59.040850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.872 qpair failed and we were unable to recover it. 00:29:33.872 [2024-07-15 22:26:59.041322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.872 [2024-07-15 22:26:59.041350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.872 qpair failed and we were unable to recover it. 00:29:33.872 [2024-07-15 22:26:59.041666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.872 [2024-07-15 22:26:59.041675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.872 qpair failed and we were unable to recover it. 00:29:33.872 [2024-07-15 22:26:59.042002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.872 [2024-07-15 22:26:59.042008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.872 qpair failed and we were unable to recover it. 00:29:33.872 [2024-07-15 22:26:59.042394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.872 [2024-07-15 22:26:59.042402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.872 qpair failed and we were unable to recover it. 
00:29:33.872 [2024-07-15 22:26:59.042850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.872 [2024-07-15 22:26:59.042856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.872 qpair failed and we were unable to recover it. 00:29:33.872 [2024-07-15 22:26:59.043253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.872 [2024-07-15 22:26:59.043260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.872 qpair failed and we were unable to recover it. 00:29:33.872 [2024-07-15 22:26:59.043682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.872 [2024-07-15 22:26:59.043689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.872 qpair failed and we were unable to recover it. 00:29:33.872 [2024-07-15 22:26:59.044102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.872 [2024-07-15 22:26:59.044109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.872 qpair failed and we were unable to recover it. 00:29:33.872 [2024-07-15 22:26:59.044500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.872 [2024-07-15 22:26:59.044507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.872 qpair failed and we were unable to recover it. 00:29:33.872 [2024-07-15 22:26:59.044896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.872 [2024-07-15 22:26:59.044902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.872 qpair failed and we were unable to recover it. 00:29:33.873 [2024-07-15 22:26:59.045382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.873 [2024-07-15 22:26:59.045410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.873 qpair failed and we were unable to recover it. 00:29:33.873 [2024-07-15 22:26:59.045732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.873 [2024-07-15 22:26:59.045741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.873 qpair failed and we were unable to recover it. 00:29:33.873 [2024-07-15 22:26:59.046173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.873 [2024-07-15 22:26:59.046181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.873 qpair failed and we were unable to recover it. 00:29:33.873 [2024-07-15 22:26:59.046496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.873 [2024-07-15 22:26:59.046504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.873 qpair failed and we were unable to recover it. 
00:29:33.873 [2024-07-15 22:26:59.046916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.873 [2024-07-15 22:26:59.046923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.873 qpair failed and we were unable to recover it. 00:29:33.873 [2024-07-15 22:26:59.047240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.873 [2024-07-15 22:26:59.047248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.873 qpair failed and we were unable to recover it. 00:29:33.873 [2024-07-15 22:26:59.047636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.873 [2024-07-15 22:26:59.047643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.873 qpair failed and we were unable to recover it. 00:29:33.873 [2024-07-15 22:26:59.048030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.873 [2024-07-15 22:26:59.048036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.873 qpair failed and we were unable to recover it. 00:29:33.873 [2024-07-15 22:26:59.048432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.873 [2024-07-15 22:26:59.048439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.873 qpair failed and we were unable to recover it. 00:29:33.873 [2024-07-15 22:26:59.048837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.873 [2024-07-15 22:26:59.048844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.873 qpair failed and we were unable to recover it. 00:29:33.873 [2024-07-15 22:26:59.049229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.873 [2024-07-15 22:26:59.049236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.873 qpair failed and we were unable to recover it. 00:29:33.873 [2024-07-15 22:26:59.049666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.873 [2024-07-15 22:26:59.049674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.873 qpair failed and we were unable to recover it. 00:29:33.873 [2024-07-15 22:26:59.050097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.873 [2024-07-15 22:26:59.050105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.873 qpair failed and we were unable to recover it. 00:29:33.873 [2024-07-15 22:26:59.050518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.873 [2024-07-15 22:26:59.050527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.873 qpair failed and we were unable to recover it. 
00:29:33.873 [2024-07-15 22:26:59.050950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.873 [2024-07-15 22:26:59.050958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.873 qpair failed and we were unable to recover it. 00:29:33.873 [2024-07-15 22:26:59.051458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.873 [2024-07-15 22:26:59.051486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.873 qpair failed and we were unable to recover it. 00:29:33.873 [2024-07-15 22:26:59.051891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.873 [2024-07-15 22:26:59.051899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.873 qpair failed and we were unable to recover it. 00:29:33.873 [2024-07-15 22:26:59.052109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.873 [2024-07-15 22:26:59.052118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.873 qpair failed and we were unable to recover it. 00:29:33.873 [2024-07-15 22:26:59.052564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.873 [2024-07-15 22:26:59.052572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.873 qpair failed and we were unable to recover it. 00:29:33.873 [2024-07-15 22:26:59.052963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.873 [2024-07-15 22:26:59.052970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.873 qpair failed and we were unable to recover it. 00:29:33.873 [2024-07-15 22:26:59.053493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.873 [2024-07-15 22:26:59.053520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.873 qpair failed and we were unable to recover it. 00:29:33.873 [2024-07-15 22:26:59.053931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.873 [2024-07-15 22:26:59.053940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.873 qpair failed and we were unable to recover it. 00:29:33.873 [2024-07-15 22:26:59.054463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.873 [2024-07-15 22:26:59.054490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.873 qpair failed and we were unable to recover it. 00:29:33.873 [2024-07-15 22:26:59.054898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.873 [2024-07-15 22:26:59.054907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.873 qpair failed and we were unable to recover it. 
00:29:33.873 [2024-07-15 22:26:59.055425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.873 [2024-07-15 22:26:59.055453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.873 qpair failed and we were unable to recover it. 00:29:33.873 [2024-07-15 22:26:59.055854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.873 [2024-07-15 22:26:59.055862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.873 qpair failed and we were unable to recover it. 00:29:33.873 [2024-07-15 22:26:59.056356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.873 [2024-07-15 22:26:59.056383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.873 qpair failed and we were unable to recover it. 00:29:33.873 [2024-07-15 22:26:59.056787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.873 [2024-07-15 22:26:59.056795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.873 qpair failed and we were unable to recover it. 00:29:33.873 [2024-07-15 22:26:59.057211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.873 [2024-07-15 22:26:59.057218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.873 qpair failed and we were unable to recover it. 00:29:33.874 [2024-07-15 22:26:59.057646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.874 [2024-07-15 22:26:59.057653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.874 qpair failed and we were unable to recover it. 00:29:33.874 [2024-07-15 22:26:59.057850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.874 [2024-07-15 22:26:59.057859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.874 qpair failed and we were unable to recover it. 00:29:33.874 [2024-07-15 22:26:59.058185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.874 [2024-07-15 22:26:59.058192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.874 qpair failed and we were unable to recover it. 00:29:33.874 [2024-07-15 22:26:59.058602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.874 [2024-07-15 22:26:59.058610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.874 qpair failed and we were unable to recover it. 00:29:33.874 [2024-07-15 22:26:59.059016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.874 [2024-07-15 22:26:59.059023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.874 qpair failed and we were unable to recover it. 
00:29:33.874 [2024-07-15 22:26:59.059439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.874 [2024-07-15 22:26:59.059446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.874 qpair failed and we were unable to recover it. 00:29:33.874 [2024-07-15 22:26:59.059709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.874 [2024-07-15 22:26:59.059717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.874 qpair failed and we were unable to recover it. 00:29:33.874 [2024-07-15 22:26:59.060198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.874 [2024-07-15 22:26:59.060206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.874 qpair failed and we were unable to recover it. 00:29:33.874 [2024-07-15 22:26:59.060317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:33.874 [2024-07-15 22:26:59.060576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.874 [2024-07-15 22:26:59.060584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.874 qpair failed and we were unable to recover it. 00:29:33.874 [2024-07-15 22:26:59.061016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.874 [2024-07-15 22:26:59.061024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.874 qpair failed and we were unable to recover it. 00:29:33.874 [2024-07-15 22:26:59.061455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.874 [2024-07-15 22:26:59.061462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.874 qpair failed and we were unable to recover it. 00:29:33.874 [2024-07-15 22:26:59.061655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.874 [2024-07-15 22:26:59.061663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.874 qpair failed and we were unable to recover it. 00:29:33.874 [2024-07-15 22:26:59.062035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.874 [2024-07-15 22:26:59.062041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.874 qpair failed and we were unable to recover it. 00:29:33.874 [2024-07-15 22:26:59.062447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.874 [2024-07-15 22:26:59.062454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.874 qpair failed and we were unable to recover it. 
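The repeated errno = 111 in the burst above is Linux ECONNREFUSED: the host side's connect() to 10.0.0.2:4420 (the IANA-assigned NVMe/TCP port) is refused because nothing is listening on that address yet, while the interleaved NOTICE from app.c shows the SPDK application starting up with 4 cores available. A minimal standalone sketch that reproduces the same errno with plain POSIX sockets (illustrative only, not SPDK code; the address and port mirror the log):

/* Illustrative only -- not SPDK code. Shows how a plain POSIX connect()
 * to an address with no listener fails with errno 111 (ECONNREFUSED),
 * the same errno reported by posix_sock_create in the log above. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),          /* IANA well-known port for NVMe/TCP */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the target, Linux reports ECONNREFUSED (111). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}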
00:29:33.874 [2024-07-15 22:26:59.062881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.874 [2024-07-15 22:26:59.062889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.874 qpair failed and we were unable to recover it. 00:29:33.874 [2024-07-15 22:26:59.063297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.874 [2024-07-15 22:26:59.063304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.874 qpair failed and we were unable to recover it. 00:29:33.874 [2024-07-15 22:26:59.063708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.874 [2024-07-15 22:26:59.063714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.874 qpair failed and we were unable to recover it. 00:29:33.874 [2024-07-15 22:26:59.064071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.874 [2024-07-15 22:26:59.064078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.874 qpair failed and we were unable to recover it. 00:29:33.874 [2024-07-15 22:26:59.064489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.874 [2024-07-15 22:26:59.064496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.874 qpair failed and we were unable to recover it. 00:29:33.874 [2024-07-15 22:26:59.064904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.874 [2024-07-15 22:26:59.064912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.874 qpair failed and we were unable to recover it. 00:29:33.874 [2024-07-15 22:26:59.065368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.874 [2024-07-15 22:26:59.065375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.874 qpair failed and we were unable to recover it. 00:29:33.874 [2024-07-15 22:26:59.065763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.874 [2024-07-15 22:26:59.065769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.874 qpair failed and we were unable to recover it. 00:29:33.874 [2024-07-15 22:26:59.066058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.874 [2024-07-15 22:26:59.066066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.874 qpair failed and we were unable to recover it. 00:29:33.874 [2024-07-15 22:26:59.066442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.874 [2024-07-15 22:26:59.066451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.874 qpair failed and we were unable to recover it. 
00:29:33.874 [2024-07-15 22:26:59.066878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.874 [2024-07-15 22:26:59.066885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.874 qpair failed and we were unable to recover it. 00:29:33.874 [2024-07-15 22:26:59.067454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.874 [2024-07-15 22:26:59.067482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.874 qpair failed and we were unable to recover it. 00:29:33.874 [2024-07-15 22:26:59.067873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.874 [2024-07-15 22:26:59.067889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.874 qpair failed and we were unable to recover it. 00:29:33.874 [2024-07-15 22:26:59.068394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.874 [2024-07-15 22:26:59.068422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.874 qpair failed and we were unable to recover it. 00:29:33.874 [2024-07-15 22:26:59.068882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.874 [2024-07-15 22:26:59.068891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.874 qpair failed and we were unable to recover it. 00:29:33.874 [2024-07-15 22:26:59.069406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.874 [2024-07-15 22:26:59.069433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.874 qpair failed and we were unable to recover it. 00:29:33.874 [2024-07-15 22:26:59.069918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.874 [2024-07-15 22:26:59.069926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.874 qpair failed and we were unable to recover it. 00:29:33.875 [2024-07-15 22:26:59.070416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.875 [2024-07-15 22:26:59.070444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.875 qpair failed and we were unable to recover it. 00:29:33.875 [2024-07-15 22:26:59.070758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.875 [2024-07-15 22:26:59.070767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.875 qpair failed and we were unable to recover it. 00:29:33.875 [2024-07-15 22:26:59.071163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.875 [2024-07-15 22:26:59.071170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.875 qpair failed and we were unable to recover it. 
00:29:33.875 [2024-07-15 22:26:59.071440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.875 [2024-07-15 22:26:59.071447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.875 qpair failed and we were unable to recover it. 00:29:33.875 [2024-07-15 22:26:59.071765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.875 [2024-07-15 22:26:59.071772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.875 qpair failed and we were unable to recover it. 00:29:33.875 [2024-07-15 22:26:59.072161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.875 [2024-07-15 22:26:59.072168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.875 qpair failed and we were unable to recover it. 00:29:33.875 [2024-07-15 22:26:59.072590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.875 [2024-07-15 22:26:59.072596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.875 qpair failed and we were unable to recover it. 00:29:33.875 [2024-07-15 22:26:59.072992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.875 [2024-07-15 22:26:59.072998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.875 qpair failed and we were unable to recover it. 00:29:33.875 [2024-07-15 22:26:59.073447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.875 [2024-07-15 22:26:59.073454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.875 qpair failed and we were unable to recover it. 00:29:33.875 [2024-07-15 22:26:59.073659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.875 [2024-07-15 22:26:59.073668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.875 qpair failed and we were unable to recover it. 00:29:33.875 [2024-07-15 22:26:59.073984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.875 [2024-07-15 22:26:59.073991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.875 qpair failed and we were unable to recover it. 00:29:33.875 [2024-07-15 22:26:59.074392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.875 [2024-07-15 22:26:59.074400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.875 qpair failed and we were unable to recover it. 00:29:33.875 [2024-07-15 22:26:59.074808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.875 [2024-07-15 22:26:59.074815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.875 qpair failed and we were unable to recover it. 
00:29:33.875 [2024-07-15 22:26:59.075215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.875 [2024-07-15 22:26:59.075223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.875 qpair failed and we were unable to recover it. 00:29:33.875 [2024-07-15 22:26:59.075662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.875 [2024-07-15 22:26:59.075670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.875 qpair failed and we were unable to recover it. 00:29:33.875 [2024-07-15 22:26:59.075973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.875 [2024-07-15 22:26:59.075980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.875 qpair failed and we were unable to recover it. 00:29:33.875 [2024-07-15 22:26:59.076371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.875 [2024-07-15 22:26:59.076379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.875 qpair failed and we were unable to recover it. 00:29:33.875 [2024-07-15 22:26:59.076766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.875 [2024-07-15 22:26:59.076772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.875 qpair failed and we were unable to recover it. 00:29:33.875 [2024-07-15 22:26:59.077164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.875 [2024-07-15 22:26:59.077171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.875 qpair failed and we were unable to recover it. 00:29:33.875 [2024-07-15 22:26:59.077598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.875 [2024-07-15 22:26:59.077605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.875 qpair failed and we were unable to recover it. 00:29:33.875 [2024-07-15 22:26:59.078001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.875 [2024-07-15 22:26:59.078008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.875 qpair failed and we were unable to recover it. 00:29:33.875 [2024-07-15 22:26:59.078323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.875 [2024-07-15 22:26:59.078330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.875 qpair failed and we were unable to recover it. 00:29:33.875 [2024-07-15 22:26:59.078511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.875 [2024-07-15 22:26:59.078519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.875 qpair failed and we were unable to recover it. 
00:29:33.875 [2024-07-15 22:26:59.078884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.875 [2024-07-15 22:26:59.078890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.875 qpair failed and we were unable to recover it. 00:29:33.875 [2024-07-15 22:26:59.079199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.875 [2024-07-15 22:26:59.079206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.875 qpair failed and we were unable to recover it. 00:29:33.875 [2024-07-15 22:26:59.079620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.875 [2024-07-15 22:26:59.079626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.875 qpair failed and we were unable to recover it. 00:29:33.875 [2024-07-15 22:26:59.080054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.875 [2024-07-15 22:26:59.080061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.875 qpair failed and we were unable to recover it. 00:29:33.875 [2024-07-15 22:26:59.080377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.875 [2024-07-15 22:26:59.080384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.875 qpair failed and we were unable to recover it. 00:29:33.875 [2024-07-15 22:26:59.080876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.875 [2024-07-15 22:26:59.080882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.875 qpair failed and we were unable to recover it. 00:29:33.875 [2024-07-15 22:26:59.081193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.875 [2024-07-15 22:26:59.081200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.875 qpair failed and we were unable to recover it. 00:29:33.875 [2024-07-15 22:26:59.081598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.875 [2024-07-15 22:26:59.081605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.875 qpair failed and we were unable to recover it. 00:29:33.875 [2024-07-15 22:26:59.081902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.875 [2024-07-15 22:26:59.081909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.875 qpair failed and we were unable to recover it. 00:29:33.875 [2024-07-15 22:26:59.082328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.875 [2024-07-15 22:26:59.082337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.876 qpair failed and we were unable to recover it. 
00:29:33.876 [2024-07-15 22:26:59.082730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.876 [2024-07-15 22:26:59.082736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.876 qpair failed and we were unable to recover it. 00:29:33.876 [2024-07-15 22:26:59.083135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.876 [2024-07-15 22:26:59.083142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.876 qpair failed and we were unable to recover it. 00:29:33.876 [2024-07-15 22:26:59.083460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.876 [2024-07-15 22:26:59.083467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.876 qpair failed and we were unable to recover it. 00:29:33.876 [2024-07-15 22:26:59.083916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.876 [2024-07-15 22:26:59.083922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.876 qpair failed and we were unable to recover it. 00:29:33.876 [2024-07-15 22:26:59.084180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.876 [2024-07-15 22:26:59.084187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.876 qpair failed and we were unable to recover it. 00:29:33.876 [2024-07-15 22:26:59.084431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.876 [2024-07-15 22:26:59.084437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.876 qpair failed and we were unable to recover it. 00:29:33.876 [2024-07-15 22:26:59.084873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.876 [2024-07-15 22:26:59.084880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.876 qpair failed and we were unable to recover it. 00:29:33.876 [2024-07-15 22:26:59.085262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.876 [2024-07-15 22:26:59.085269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.876 qpair failed and we were unable to recover it. 00:29:33.876 [2024-07-15 22:26:59.085682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.876 [2024-07-15 22:26:59.085688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.876 qpair failed and we were unable to recover it. 00:29:33.876 [2024-07-15 22:26:59.086086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.876 [2024-07-15 22:26:59.086092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.876 qpair failed and we were unable to recover it. 
00:29:33.876 [2024-07-15 22:26:59.086369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.876 [2024-07-15 22:26:59.086375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.876 qpair failed and we were unable to recover it. 00:29:33.876 [2024-07-15 22:26:59.086550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.876 [2024-07-15 22:26:59.086558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.876 qpair failed and we were unable to recover it. 00:29:33.876 [2024-07-15 22:26:59.086759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.876 [2024-07-15 22:26:59.086765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.876 qpair failed and we were unable to recover it. 00:29:33.876 [2024-07-15 22:26:59.087194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.876 [2024-07-15 22:26:59.087202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.876 qpair failed and we were unable to recover it. 00:29:33.876 [2024-07-15 22:26:59.087626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.876 [2024-07-15 22:26:59.087632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.876 qpair failed and we were unable to recover it. 00:29:33.876 [2024-07-15 22:26:59.088024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.876 [2024-07-15 22:26:59.088030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.876 qpair failed and we were unable to recover it. 00:29:33.876 [2024-07-15 22:26:59.088435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.876 [2024-07-15 22:26:59.088442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.876 qpair failed and we were unable to recover it. 00:29:33.876 [2024-07-15 22:26:59.088693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.876 [2024-07-15 22:26:59.088700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.876 qpair failed and we were unable to recover it. 00:29:33.876 [2024-07-15 22:26:59.089111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.876 [2024-07-15 22:26:59.089117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.876 qpair failed and we were unable to recover it. 00:29:33.876 [2024-07-15 22:26:59.089528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.876 [2024-07-15 22:26:59.089534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.876 qpair failed and we were unable to recover it. 
00:29:33.876 [2024-07-15 22:26:59.089923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.876 [2024-07-15 22:26:59.089930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.876 qpair failed and we were unable to recover it. 00:29:33.876 [2024-07-15 22:26:59.090358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.876 [2024-07-15 22:26:59.090365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.876 qpair failed and we were unable to recover it. 00:29:33.876 [2024-07-15 22:26:59.090749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.876 [2024-07-15 22:26:59.090755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.876 qpair failed and we were unable to recover it. 00:29:33.876 [2024-07-15 22:26:59.091162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.876 [2024-07-15 22:26:59.091169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.876 qpair failed and we were unable to recover it. 00:29:33.877 [2024-07-15 22:26:59.091583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.877 [2024-07-15 22:26:59.091589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.877 qpair failed and we were unable to recover it. 00:29:33.877 [2024-07-15 22:26:59.091854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.877 [2024-07-15 22:26:59.091861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.877 qpair failed and we were unable to recover it. 00:29:33.877 [2024-07-15 22:26:59.092070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.877 [2024-07-15 22:26:59.092079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.877 qpair failed and we were unable to recover it. 00:29:33.877 [2024-07-15 22:26:59.092509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.877 [2024-07-15 22:26:59.092517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.877 qpair failed and we were unable to recover it. 00:29:33.877 [2024-07-15 22:26:59.092927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.877 [2024-07-15 22:26:59.092934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.877 qpair failed and we were unable to recover it. 00:29:33.877 [2024-07-15 22:26:59.093455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.877 [2024-07-15 22:26:59.093462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.877 qpair failed and we were unable to recover it. 
00:29:33.877 [2024-07-15 22:26:59.093875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.877 [2024-07-15 22:26:59.093882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.877 qpair failed and we were unable to recover it. 00:29:33.877 [2024-07-15 22:26:59.094314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.877 [2024-07-15 22:26:59.094342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.877 qpair failed and we were unable to recover it. 00:29:33.877 [2024-07-15 22:26:59.094749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.877 [2024-07-15 22:26:59.094758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.877 qpair failed and we were unable to recover it. 00:29:33.877 [2024-07-15 22:26:59.095240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.877 [2024-07-15 22:26:59.095248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.877 qpair failed and we were unable to recover it. 00:29:33.877 [2024-07-15 22:26:59.095534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.877 [2024-07-15 22:26:59.095540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.877 qpair failed and we were unable to recover it. 00:29:33.877 [2024-07-15 22:26:59.095968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.877 [2024-07-15 22:26:59.095975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.877 qpair failed and we were unable to recover it. 00:29:33.877 [2024-07-15 22:26:59.096248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.877 [2024-07-15 22:26:59.096256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.877 qpair failed and we were unable to recover it. 00:29:33.877 [2024-07-15 22:26:59.096666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.877 [2024-07-15 22:26:59.096672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.877 qpair failed and we were unable to recover it. 00:29:33.877 [2024-07-15 22:26:59.097058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.877 [2024-07-15 22:26:59.097065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.877 qpair failed and we were unable to recover it. 00:29:33.877 [2024-07-15 22:26:59.097439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.877 [2024-07-15 22:26:59.097449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.877 qpair failed and we were unable to recover it. 
00:29:33.877 [2024-07-15 22:26:59.097859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.877 [2024-07-15 22:26:59.097866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.877 qpair failed and we were unable to recover it. 00:29:33.877 [2024-07-15 22:26:59.098170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.877 [2024-07-15 22:26:59.098177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.877 qpair failed and we were unable to recover it. 00:29:33.877 [2024-07-15 22:26:59.098581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.877 [2024-07-15 22:26:59.098588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.877 qpair failed and we were unable to recover it. 00:29:33.877 [2024-07-15 22:26:59.099071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.877 [2024-07-15 22:26:59.099077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.877 qpair failed and we were unable to recover it. 00:29:33.877 [2024-07-15 22:26:59.099468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.877 [2024-07-15 22:26:59.099474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.877 qpair failed and we were unable to recover it. 00:29:33.877 [2024-07-15 22:26:59.099555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.877 [2024-07-15 22:26:59.099564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.877 qpair failed and we were unable to recover it. 00:29:33.877 [2024-07-15 22:26:59.099976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.877 [2024-07-15 22:26:59.099982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.877 qpair failed and we were unable to recover it. 00:29:33.877 [2024-07-15 22:26:59.100393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.877 [2024-07-15 22:26:59.100400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.877 qpair failed and we were unable to recover it. 00:29:33.877 [2024-07-15 22:26:59.100781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.877 [2024-07-15 22:26:59.100795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.877 qpair failed and we were unable to recover it. 00:29:33.877 [2024-07-15 22:26:59.101206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.877 [2024-07-15 22:26:59.101213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.877 qpair failed and we were unable to recover it. 
00:29:33.877 [2024-07-15 22:26:59.101637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.877 [2024-07-15 22:26:59.101643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.877 qpair failed and we were unable to recover it. 00:29:33.877 [2024-07-15 22:26:59.101854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.877 [2024-07-15 22:26:59.101860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.877 qpair failed and we were unable to recover it. 00:29:33.877 [2024-07-15 22:26:59.102278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.877 [2024-07-15 22:26:59.102285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.877 qpair failed and we were unable to recover it. 00:29:33.877 [2024-07-15 22:26:59.102708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.877 [2024-07-15 22:26:59.102715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.877 qpair failed and we were unable to recover it. 00:29:33.877 [2024-07-15 22:26:59.103179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.877 [2024-07-15 22:26:59.103187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.877 qpair failed and we were unable to recover it. 00:29:33.877 [2024-07-15 22:26:59.103594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.877 [2024-07-15 22:26:59.103601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.877 qpair failed and we were unable to recover it. 00:29:33.877 [2024-07-15 22:26:59.104012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.878 [2024-07-15 22:26:59.104019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.878 qpair failed and we were unable to recover it. 00:29:33.878 [2024-07-15 22:26:59.104482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.878 [2024-07-15 22:26:59.104489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.878 qpair failed and we were unable to recover it. 00:29:33.878 [2024-07-15 22:26:59.104914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.878 [2024-07-15 22:26:59.104921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.878 qpair failed and we were unable to recover it. 00:29:33.878 [2024-07-15 22:26:59.105310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.878 [2024-07-15 22:26:59.105317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.878 qpair failed and we were unable to recover it. 
00:29:33.878 [2024-07-15 22:26:59.105743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.878 [2024-07-15 22:26:59.105750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.878 qpair failed and we were unable to recover it. 00:29:33.878 [2024-07-15 22:26:59.106162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.878 [2024-07-15 22:26:59.106169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.878 qpair failed and we were unable to recover it. 00:29:33.878 [2024-07-15 22:26:59.106609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.878 [2024-07-15 22:26:59.106615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.878 qpair failed and we were unable to recover it. 00:29:33.878 [2024-07-15 22:26:59.107000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.878 [2024-07-15 22:26:59.107007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.878 qpair failed and we were unable to recover it. 00:29:33.878 [2024-07-15 22:26:59.107477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.878 [2024-07-15 22:26:59.107483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.878 qpair failed and we were unable to recover it. 00:29:33.878 [2024-07-15 22:26:59.107869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.878 [2024-07-15 22:26:59.107875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.878 qpair failed and we were unable to recover it. 00:29:33.878 [2024-07-15 22:26:59.108311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.878 [2024-07-15 22:26:59.108318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.878 qpair failed and we were unable to recover it. 00:29:33.878 [2024-07-15 22:26:59.108712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.878 [2024-07-15 22:26:59.108719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.878 qpair failed and we were unable to recover it. 00:29:33.878 [2024-07-15 22:26:59.109128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.878 [2024-07-15 22:26:59.109135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.878 qpair failed and we were unable to recover it. 00:29:33.878 [2024-07-15 22:26:59.109524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.878 [2024-07-15 22:26:59.109531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.878 qpair failed and we were unable to recover it. 
00:29:33.878 [2024-07-15 22:26:59.109962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.878 [2024-07-15 22:26:59.109968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.878 qpair failed and we were unable to recover it. 00:29:33.878 [2024-07-15 22:26:59.110362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.878 [2024-07-15 22:26:59.110389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.878 qpair failed and we were unable to recover it. 00:29:33.878 [2024-07-15 22:26:59.110811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.878 [2024-07-15 22:26:59.110820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.878 qpair failed and we were unable to recover it. 00:29:33.878 [2024-07-15 22:26:59.111334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.878 [2024-07-15 22:26:59.111373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.878 qpair failed and we were unable to recover it. 00:29:33.878 [2024-07-15 22:26:59.111770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.878 [2024-07-15 22:26:59.111779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.878 qpair failed and we were unable to recover it. 00:29:33.878 [2024-07-15 22:26:59.111876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.878 [2024-07-15 22:26:59.111885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.878 qpair failed and we were unable to recover it. 00:29:33.878 [2024-07-15 22:26:59.112374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.878 [2024-07-15 22:26:59.112381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.878 qpair failed and we were unable to recover it. 00:29:33.878 [2024-07-15 22:26:59.112854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.878 [2024-07-15 22:26:59.112861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.878 qpair failed and we were unable to recover it. 00:29:33.878 [2024-07-15 22:26:59.113266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.878 [2024-07-15 22:26:59.113272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.878 qpair failed and we were unable to recover it. 00:29:33.878 [2024-07-15 22:26:59.113680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.878 [2024-07-15 22:26:59.113690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.878 qpair failed and we were unable to recover it. 
00:29:33.878 [2024-07-15 22:26:59.114098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.878 [2024-07-15 22:26:59.114105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.878 qpair failed and we were unable to recover it. 00:29:33.878 [2024-07-15 22:26:59.114514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.878 [2024-07-15 22:26:59.114521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.878 qpair failed and we were unable to recover it. 00:29:33.878 [2024-07-15 22:26:59.114908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.878 [2024-07-15 22:26:59.114914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.878 qpair failed and we were unable to recover it. 00:29:33.878 [2024-07-15 22:26:59.115348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.878 [2024-07-15 22:26:59.115376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.878 qpair failed and we were unable to recover it. 00:29:33.878 [2024-07-15 22:26:59.115631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.878 [2024-07-15 22:26:59.115639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.878 qpair failed and we were unable to recover it. 00:29:33.878 [2024-07-15 22:26:59.116055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.878 [2024-07-15 22:26:59.116062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.878 qpair failed and we were unable to recover it. 00:29:33.878 [2024-07-15 22:26:59.116350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.878 [2024-07-15 22:26:59.116358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.878 qpair failed and we were unable to recover it. 00:29:33.878 [2024-07-15 22:26:59.116804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.878 [2024-07-15 22:26:59.116811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.878 qpair failed and we were unable to recover it. 00:29:33.878 [2024-07-15 22:26:59.117231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.878 [2024-07-15 22:26:59.117238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.879 qpair failed and we were unable to recover it. 00:29:33.879 [2024-07-15 22:26:59.117466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.879 [2024-07-15 22:26:59.117476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.879 qpair failed and we were unable to recover it. 
00:29:33.879 [2024-07-15 22:26:59.117780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.879 [2024-07-15 22:26:59.117787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.879 qpair failed and we were unable to recover it. 00:29:33.879 [2024-07-15 22:26:59.118226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.879 [2024-07-15 22:26:59.118233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.879 qpair failed and we were unable to recover it. 00:29:33.879 [2024-07-15 22:26:59.118632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.879 [2024-07-15 22:26:59.118638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.879 qpair failed and we were unable to recover it. 00:29:33.879 [2024-07-15 22:26:59.118951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.879 [2024-07-15 22:26:59.118958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.879 qpair failed and we were unable to recover it. 00:29:33.879 [2024-07-15 22:26:59.119269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.879 [2024-07-15 22:26:59.119277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.879 qpair failed and we were unable to recover it. 00:29:33.879 [2024-07-15 22:26:59.119636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.879 [2024-07-15 22:26:59.119643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.879 qpair failed and we were unable to recover it. 00:29:33.879 [2024-07-15 22:26:59.120049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.879 [2024-07-15 22:26:59.120055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.879 qpair failed and we were unable to recover it. 00:29:33.879 [2024-07-15 22:26:59.120367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.879 [2024-07-15 22:26:59.120374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.879 qpair failed and we were unable to recover it. 00:29:33.879 [2024-07-15 22:26:59.120782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.879 [2024-07-15 22:26:59.120789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.879 qpair failed and we were unable to recover it. 00:29:33.879 [2024-07-15 22:26:59.121217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.879 [2024-07-15 22:26:59.121223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.879 qpair failed and we were unable to recover it. 
00:29:33.879 [2024-07-15 22:26:59.121608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.879 [2024-07-15 22:26:59.121615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:33.879 qpair failed and we were unable to recover it.
00:29:33.879 [2024-07-15 22:26:59.122028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.879 [2024-07-15 22:26:59.122035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:33.879 qpair failed and we were unable to recover it.
00:29:33.879 [2024-07-15 22:26:59.122438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.879 [2024-07-15 22:26:59.122445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:33.879 qpair failed and we were unable to recover it.
00:29:33.879 [2024-07-15 22:26:59.122834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.879 [2024-07-15 22:26:59.122841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:33.879 qpair failed and we were unable to recover it.
00:29:33.879 [2024-07-15 22:26:59.123243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.879 [2024-07-15 22:26:59.123249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:33.879 qpair failed and we were unable to recover it.
00:29:33.879 [2024-07-15 22:26:59.123662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.879 [2024-07-15 22:26:59.123668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:33.879 qpair failed and we were unable to recover it.
00:29:33.879 [2024-07-15 22:26:59.124086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.879 [2024-07-15 22:26:59.124092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:33.879 qpair failed and we were unable to recover it.
00:29:33.879 [2024-07-15 22:26:59.124501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.879 [2024-07-15 22:26:59.124509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:33.879 qpair failed and we were unable to recover it.
00:29:33.879 [2024-07-15 22:26:59.124841] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:33.879 [2024-07-15 22:26:59.124865] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:33.879 [2024-07-15 22:26:59.124873] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:33.879 [2024-07-15 22:26:59.124879] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:33.879 [2024-07-15 22:26:59.124884] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
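The five app_setup_trace notices above show that the nvmf target came up with tracepoint group mask 0xFFFF and that its trace history lives in /dev/shm/nvmf_trace.0. A minimal sketch of how that trace could be captured, assuming a stock SPDK build tree (the build/bin path and the /tmp destination are illustrative assumptions, not taken from this log):

    # Snapshot the live trace from the running 'nvmf' app (shm instance 0),
    # exactly as the NOTICE above suggests:
    build/bin/spdk_trace -s nvmf -i 0

    # Or keep the raw shared-memory trace file and decode the copy later;
    # -f points spdk_trace at a trace file on disk instead of live shm:
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0
    build/bin/spdk_trace -f /tmp/nvmf_trace.0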
00:29:33.879 [2024-07-15 22:26:59.124911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.879 [2024-07-15 22:26:59.124918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:33.879 qpair failed and we were unable to recover it.
00:29:33.879 [2024-07-15 22:26:59.125023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:29:33.879 [2024-07-15 22:26:59.125166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:29:33.879 [2024-07-15 22:26:59.125349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.879 [2024-07-15 22:26:59.125357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:33.879 qpair failed and we were unable to recover it.
00:29:33.879 [2024-07-15 22:26:59.125468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:29:33.879 [2024-07-15 22:26:59.125469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:29:33.879 [2024-07-15 22:26:59.125781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.879 [2024-07-15 22:26:59.125788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:33.879 qpair failed and we were unable to recover it.
00:29:33.879 [2024-07-15 22:26:59.126316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.879 [2024-07-15 22:26:59.126343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:33.879 qpair failed and we were unable to recover it.
00:29:33.879 [2024-07-15 22:26:59.126620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.879 [2024-07-15 22:26:59.126629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:33.879 qpair failed and we were unable to recover it.
00:29:33.879 [2024-07-15 22:26:59.127041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.879 [2024-07-15 22:26:59.127048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:33.880 qpair failed and we were unable to recover it.
00:29:33.880 [2024-07-15 22:26:59.127462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.880 [2024-07-15 22:26:59.127469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:33.880 qpair failed and we were unable to recover it.
00:29:33.880 [2024-07-15 22:26:59.127908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.880 [2024-07-15 22:26:59.127915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:33.880 qpair failed and we were unable to recover it.
00:29:33.880 [2024-07-15 22:26:59.128310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.880 [2024-07-15 22:26:59.128317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:33.880 qpair failed and we were unable to recover it.
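The reactor_run notices interleaved above show SPDK reactors starting on cores 4, 5, 6 and 7, which implies the target was launched with a CPU mask covering those four cores. A hedged sketch of an equivalent invocation (binary name, mask value and RPC socket path are assumptions consistent with the log, not copied from it):

    # 0xF0 sets bits 4-7, so one reactor thread is spawned on each of cores 4,5,6,7;
    # -r selects the RPC listen socket that later rpc.py calls talk to.
    build/bin/nvmf_tgt -m 0xF0 -r /var/tmp/spdk.sock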
00:29:33.880 [2024-07-15 22:26:59.128740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.880 [2024-07-15 22:26:59.128747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.880 qpair failed and we were unable to recover it. 00:29:33.880 [2024-07-15 22:26:59.129190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.880 [2024-07-15 22:26:59.129197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.880 qpair failed and we were unable to recover it. 00:29:33.880 [2024-07-15 22:26:59.129610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.880 [2024-07-15 22:26:59.129617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.880 qpair failed and we were unable to recover it. 00:29:33.880 [2024-07-15 22:26:59.130057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.880 [2024-07-15 22:26:59.130064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.880 qpair failed and we were unable to recover it. 00:29:33.880 [2024-07-15 22:26:59.130330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.880 [2024-07-15 22:26:59.130339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.880 qpair failed and we were unable to recover it. 00:29:33.880 [2024-07-15 22:26:59.130749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.880 [2024-07-15 22:26:59.130756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.880 qpair failed and we were unable to recover it. 00:29:33.880 [2024-07-15 22:26:59.131147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.880 [2024-07-15 22:26:59.131154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.880 qpair failed and we were unable to recover it. 00:29:33.880 [2024-07-15 22:26:59.131609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.880 [2024-07-15 22:26:59.131616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.880 qpair failed and we were unable to recover it. 00:29:33.880 [2024-07-15 22:26:59.131912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.880 [2024-07-15 22:26:59.131919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.880 qpair failed and we were unable to recover it. 00:29:33.880 [2024-07-15 22:26:59.132325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.880 [2024-07-15 22:26:59.132332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.880 qpair failed and we were unable to recover it. 
00:29:33.880 [2024-07-15 22:26:59.132725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.880 [2024-07-15 22:26:59.132731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.880 qpair failed and we were unable to recover it. 00:29:33.880 [2024-07-15 22:26:59.132935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.880 [2024-07-15 22:26:59.132945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.880 qpair failed and we were unable to recover it. 00:29:33.880 [2024-07-15 22:26:59.133360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.880 [2024-07-15 22:26:59.133370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.880 qpair failed and we were unable to recover it. 00:29:33.880 [2024-07-15 22:26:59.133569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.880 [2024-07-15 22:26:59.133577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.880 qpair failed and we were unable to recover it. 00:29:33.880 [2024-07-15 22:26:59.134000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.880 [2024-07-15 22:26:59.134007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.880 qpair failed and we were unable to recover it. 00:29:33.880 [2024-07-15 22:26:59.134310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.880 [2024-07-15 22:26:59.134319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.880 qpair failed and we were unable to recover it. 00:29:33.880 [2024-07-15 22:26:59.134726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.880 [2024-07-15 22:26:59.134733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.880 qpair failed and we were unable to recover it. 00:29:33.880 [2024-07-15 22:26:59.135120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.880 [2024-07-15 22:26:59.135134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.880 qpair failed and we were unable to recover it. 00:29:33.880 [2024-07-15 22:26:59.135555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.880 [2024-07-15 22:26:59.135562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.880 qpair failed and we were unable to recover it. 00:29:33.880 [2024-07-15 22:26:59.135882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.880 [2024-07-15 22:26:59.135889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.880 qpair failed and we were unable to recover it. 
00:29:33.880 [2024-07-15 22:26:59.136301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.880 [2024-07-15 22:26:59.136308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.880 qpair failed and we were unable to recover it. 00:29:33.880 [2024-07-15 22:26:59.136779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.880 [2024-07-15 22:26:59.136785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.880 qpair failed and we were unable to recover it. 00:29:33.880 [2024-07-15 22:26:59.137195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.880 [2024-07-15 22:26:59.137203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.880 qpair failed and we were unable to recover it. 00:29:33.880 [2024-07-15 22:26:59.137619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.880 [2024-07-15 22:26:59.137626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.880 qpair failed and we were unable to recover it. 00:29:33.880 [2024-07-15 22:26:59.138060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.880 [2024-07-15 22:26:59.138067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.880 qpair failed and we were unable to recover it. 00:29:33.880 [2024-07-15 22:26:59.138464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.880 [2024-07-15 22:26:59.138471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.880 qpair failed and we were unable to recover it. 00:29:33.880 [2024-07-15 22:26:59.138860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.880 [2024-07-15 22:26:59.138867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.880 qpair failed and we were unable to recover it. 00:29:33.880 [2024-07-15 22:26:59.139255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.881 [2024-07-15 22:26:59.139262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.881 qpair failed and we were unable to recover it. 00:29:33.881 [2024-07-15 22:26:59.139553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.881 [2024-07-15 22:26:59.139560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.881 qpair failed and we were unable to recover it. 00:29:33.881 [2024-07-15 22:26:59.139853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.881 [2024-07-15 22:26:59.139860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.881 qpair failed and we were unable to recover it. 
00:29:33.881 [2024-07-15 22:26:59.140250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.881 [2024-07-15 22:26:59.140258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.881 qpair failed and we were unable to recover it. 00:29:33.881 [2024-07-15 22:26:59.140473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.881 [2024-07-15 22:26:59.140482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.881 qpair failed and we were unable to recover it. 00:29:33.881 [2024-07-15 22:26:59.140804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.881 [2024-07-15 22:26:59.140811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.881 qpair failed and we were unable to recover it. 00:29:33.881 [2024-07-15 22:26:59.141217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.881 [2024-07-15 22:26:59.141224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.881 qpair failed and we were unable to recover it. 00:29:33.881 [2024-07-15 22:26:59.141666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.881 [2024-07-15 22:26:59.141673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.881 qpair failed and we were unable to recover it. 00:29:33.881 [2024-07-15 22:26:59.142159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.881 [2024-07-15 22:26:59.142166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.881 qpair failed and we were unable to recover it. 00:29:33.881 [2024-07-15 22:26:59.142454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.881 [2024-07-15 22:26:59.142460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.881 qpair failed and we were unable to recover it. 00:29:33.881 [2024-07-15 22:26:59.142885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.881 [2024-07-15 22:26:59.142892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.881 qpair failed and we were unable to recover it. 00:29:33.881 [2024-07-15 22:26:59.143102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.881 [2024-07-15 22:26:59.143110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.881 qpair failed and we were unable to recover it. 00:29:33.881 [2024-07-15 22:26:59.143551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.881 [2024-07-15 22:26:59.143560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.881 qpair failed and we were unable to recover it. 
00:29:33.881 [2024-07-15 22:26:59.143975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.881 [2024-07-15 22:26:59.143982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.881 qpair failed and we were unable to recover it. 00:29:33.881 [2024-07-15 22:26:59.144374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.881 [2024-07-15 22:26:59.144381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.881 qpair failed and we were unable to recover it. 00:29:33.881 [2024-07-15 22:26:59.144776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.881 [2024-07-15 22:26:59.144784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.881 qpair failed and we were unable to recover it. 00:29:33.881 [2024-07-15 22:26:59.145073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.881 [2024-07-15 22:26:59.145079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.881 qpair failed and we were unable to recover it. 00:29:33.881 [2024-07-15 22:26:59.145530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.881 [2024-07-15 22:26:59.145537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.881 qpair failed and we were unable to recover it. 00:29:33.881 [2024-07-15 22:26:59.145791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.881 [2024-07-15 22:26:59.145799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.881 qpair failed and we were unable to recover it. 00:29:33.881 [2024-07-15 22:26:59.146229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.881 [2024-07-15 22:26:59.146236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.881 qpair failed and we were unable to recover it. 00:29:33.881 [2024-07-15 22:26:59.146637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.881 [2024-07-15 22:26:59.146644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.881 qpair failed and we were unable to recover it. 00:29:33.881 [2024-07-15 22:26:59.147050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.881 [2024-07-15 22:26:59.147057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.881 qpair failed and we were unable to recover it. 00:29:33.881 [2024-07-15 22:26:59.147452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.881 [2024-07-15 22:26:59.147459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.881 qpair failed and we were unable to recover it. 
00:29:33.881 [2024-07-15 22:26:59.147849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.881 [2024-07-15 22:26:59.147857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.881 qpair failed and we were unable to recover it. 00:29:33.881 [2024-07-15 22:26:59.148243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.881 [2024-07-15 22:26:59.148250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.881 qpair failed and we were unable to recover it. 00:29:33.881 [2024-07-15 22:26:59.148642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.881 [2024-07-15 22:26:59.148650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.881 qpair failed and we were unable to recover it. 00:29:33.881 [2024-07-15 22:26:59.148967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.881 [2024-07-15 22:26:59.148974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.881 qpair failed and we were unable to recover it. 00:29:33.881 [2024-07-15 22:26:59.149309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.881 [2024-07-15 22:26:59.149316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.881 qpair failed and we were unable to recover it. 00:29:33.881 [2024-07-15 22:26:59.149522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.881 [2024-07-15 22:26:59.149529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.881 qpair failed and we were unable to recover it. 00:29:33.881 [2024-07-15 22:26:59.149905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.881 [2024-07-15 22:26:59.149913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.881 qpair failed and we were unable to recover it. 00:29:33.881 [2024-07-15 22:26:59.150187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.881 [2024-07-15 22:26:59.150195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.881 qpair failed and we were unable to recover it. 00:29:33.881 [2024-07-15 22:26:59.150598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.881 [2024-07-15 22:26:59.150606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.881 qpair failed and we were unable to recover it. 00:29:33.881 [2024-07-15 22:26:59.151000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.882 [2024-07-15 22:26:59.151007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.882 qpair failed and we were unable to recover it. 
00:29:33.882 [2024-07-15 22:26:59.151480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.882 [2024-07-15 22:26:59.151487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.882 qpair failed and we were unable to recover it. 00:29:33.882 [2024-07-15 22:26:59.151793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.882 [2024-07-15 22:26:59.151800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.882 qpair failed and we were unable to recover it. 00:29:33.882 [2024-07-15 22:26:59.152011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.882 [2024-07-15 22:26:59.152019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.882 qpair failed and we were unable to recover it. 00:29:33.882 [2024-07-15 22:26:59.152434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.882 [2024-07-15 22:26:59.152441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.882 qpair failed and we were unable to recover it. 00:29:33.882 [2024-07-15 22:26:59.152769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.882 [2024-07-15 22:26:59.152775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.882 qpair failed and we were unable to recover it. 00:29:33.882 [2024-07-15 22:26:59.153048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.882 [2024-07-15 22:26:59.153054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.882 qpair failed and we were unable to recover it. 00:29:33.882 [2024-07-15 22:26:59.153452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.882 [2024-07-15 22:26:59.153458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.882 qpair failed and we were unable to recover it. 00:29:33.882 [2024-07-15 22:26:59.153922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.882 [2024-07-15 22:26:59.153929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.882 qpair failed and we were unable to recover it. 00:29:33.882 [2024-07-15 22:26:59.154331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.882 [2024-07-15 22:26:59.154338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.882 qpair failed and we were unable to recover it. 00:29:33.882 [2024-07-15 22:26:59.154759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.882 [2024-07-15 22:26:59.154765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:33.882 qpair failed and we were unable to recover it. 
00:29:33.882 [2024-07-15 22:26:59.155080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.882 [2024-07-15 22:26:59.155087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:33.882 qpair failed and we were unable to recover it.
00:29:33.882 .. 00:29:34.161 (the above three messages repeat for every subsequent reconnect attempt to tqpair=0x7f6158000b90, addr=10.0.0.2, port=4420, from [2024-07-15 22:26:59.155549] through [2024-07-15 22:26:59.232501]; each attempt fails with connect() errno = 111 and the qpair cannot be recovered)
00:29:34.161 [2024-07-15 22:26:59.232721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.161 [2024-07-15 22:26:59.232728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.161 qpair failed and we were unable to recover it. 00:29:34.161 [2024-07-15 22:26:59.233098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.161 [2024-07-15 22:26:59.233105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.161 qpair failed and we were unable to recover it. 00:29:34.161 [2024-07-15 22:26:59.233386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.161 [2024-07-15 22:26:59.233393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.161 qpair failed and we were unable to recover it. 00:29:34.161 [2024-07-15 22:26:59.233835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.161 [2024-07-15 22:26:59.233842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.161 qpair failed and we were unable to recover it. 00:29:34.161 [2024-07-15 22:26:59.234234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.161 [2024-07-15 22:26:59.234240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.161 qpair failed and we were unable to recover it. 00:29:34.161 [2024-07-15 22:26:59.234675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.161 [2024-07-15 22:26:59.234682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.161 qpair failed and we were unable to recover it. 00:29:34.161 [2024-07-15 22:26:59.235067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.161 [2024-07-15 22:26:59.235073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.161 qpair failed and we were unable to recover it. 00:29:34.161 [2024-07-15 22:26:59.235481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.161 [2024-07-15 22:26:59.235487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.161 qpair failed and we were unable to recover it. 00:29:34.161 [2024-07-15 22:26:59.235826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.161 [2024-07-15 22:26:59.235832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.161 qpair failed and we were unable to recover it. 00:29:34.161 [2024-07-15 22:26:59.236117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.161 [2024-07-15 22:26:59.236128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.161 qpair failed and we were unable to recover it. 
00:29:34.161 [2024-07-15 22:26:59.236537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.161 [2024-07-15 22:26:59.236543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.161 qpair failed and we were unable to recover it. 00:29:34.161 [2024-07-15 22:26:59.236944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.161 [2024-07-15 22:26:59.236951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.161 qpair failed and we were unable to recover it. 00:29:34.161 [2024-07-15 22:26:59.237440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.161 [2024-07-15 22:26:59.237468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.161 qpair failed and we were unable to recover it. 00:29:34.161 [2024-07-15 22:26:59.237876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.161 [2024-07-15 22:26:59.237885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.161 qpair failed and we were unable to recover it. 00:29:34.161 [2024-07-15 22:26:59.238113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.161 [2024-07-15 22:26:59.238119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.161 qpair failed and we were unable to recover it. 00:29:34.161 [2024-07-15 22:26:59.238539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.161 [2024-07-15 22:26:59.238546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.161 qpair failed and we were unable to recover it. 00:29:34.161 [2024-07-15 22:26:59.238944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.161 [2024-07-15 22:26:59.238951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.161 qpair failed and we were unable to recover it. 00:29:34.161 [2024-07-15 22:26:59.239460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.161 [2024-07-15 22:26:59.239488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.161 qpair failed and we were unable to recover it. 00:29:34.161 [2024-07-15 22:26:59.239897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.162 [2024-07-15 22:26:59.239906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.162 qpair failed and we were unable to recover it. 00:29:34.162 [2024-07-15 22:26:59.240493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.162 [2024-07-15 22:26:59.240520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.162 qpair failed and we were unable to recover it. 
00:29:34.162 [2024-07-15 22:26:59.240932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.162 [2024-07-15 22:26:59.240940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.162 qpair failed and we were unable to recover it. 00:29:34.162 [2024-07-15 22:26:59.241351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.162 [2024-07-15 22:26:59.241378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.162 qpair failed and we were unable to recover it. 00:29:34.162 [2024-07-15 22:26:59.241654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.162 [2024-07-15 22:26:59.241662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.162 qpair failed and we were unable to recover it. 00:29:34.162 [2024-07-15 22:26:59.242098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.162 [2024-07-15 22:26:59.242105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.162 qpair failed and we were unable to recover it. 00:29:34.162 [2024-07-15 22:26:59.242502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.162 [2024-07-15 22:26:59.242509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.162 qpair failed and we were unable to recover it. 00:29:34.162 [2024-07-15 22:26:59.242903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.162 [2024-07-15 22:26:59.242910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.162 qpair failed and we were unable to recover it. 00:29:34.162 [2024-07-15 22:26:59.243369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.162 [2024-07-15 22:26:59.243397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.162 qpair failed and we were unable to recover it. 00:29:34.162 [2024-07-15 22:26:59.243608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.162 [2024-07-15 22:26:59.243618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.162 qpair failed and we were unable to recover it. 00:29:34.162 [2024-07-15 22:26:59.244008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.162 [2024-07-15 22:26:59.244016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.162 qpair failed and we were unable to recover it. 00:29:34.162 [2024-07-15 22:26:59.244344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.162 [2024-07-15 22:26:59.244352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.162 qpair failed and we were unable to recover it. 
00:29:34.162 [2024-07-15 22:26:59.244566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.162 [2024-07-15 22:26:59.244572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.162 qpair failed and we were unable to recover it. 00:29:34.162 [2024-07-15 22:26:59.244971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.162 [2024-07-15 22:26:59.244978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.162 qpair failed and we were unable to recover it. 00:29:34.162 [2024-07-15 22:26:59.245400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.162 [2024-07-15 22:26:59.245407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.162 qpair failed and we were unable to recover it. 00:29:34.162 [2024-07-15 22:26:59.245795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.162 [2024-07-15 22:26:59.245801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.162 qpair failed and we were unable to recover it. 00:29:34.162 [2024-07-15 22:26:59.246188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.162 [2024-07-15 22:26:59.246195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.162 qpair failed and we were unable to recover it. 00:29:34.162 [2024-07-15 22:26:59.246627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.162 [2024-07-15 22:26:59.246634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.162 qpair failed and we were unable to recover it. 00:29:34.162 [2024-07-15 22:26:59.247025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.162 [2024-07-15 22:26:59.247033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.162 qpair failed and we were unable to recover it. 00:29:34.162 [2024-07-15 22:26:59.247461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.162 [2024-07-15 22:26:59.247468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.162 qpair failed and we were unable to recover it. 00:29:34.162 [2024-07-15 22:26:59.247902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.162 [2024-07-15 22:26:59.247909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.162 qpair failed and we were unable to recover it. 00:29:34.162 [2024-07-15 22:26:59.248335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.162 [2024-07-15 22:26:59.248345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.162 qpair failed and we were unable to recover it. 
00:29:34.162 [2024-07-15 22:26:59.248539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.162 [2024-07-15 22:26:59.248547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.162 qpair failed and we were unable to recover it. 00:29:34.162 [2024-07-15 22:26:59.248815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.162 [2024-07-15 22:26:59.248822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.162 qpair failed and we were unable to recover it. 00:29:34.162 [2024-07-15 22:26:59.249240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.162 [2024-07-15 22:26:59.249248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.162 qpair failed and we were unable to recover it. 00:29:34.162 [2024-07-15 22:26:59.249677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.162 [2024-07-15 22:26:59.249683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.162 qpair failed and we were unable to recover it. 00:29:34.162 [2024-07-15 22:26:59.249948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.162 [2024-07-15 22:26:59.249955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.162 qpair failed and we were unable to recover it. 00:29:34.162 [2024-07-15 22:26:59.250372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.162 [2024-07-15 22:26:59.250379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.162 qpair failed and we were unable to recover it. 00:29:34.162 [2024-07-15 22:26:59.250585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.162 [2024-07-15 22:26:59.250591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.162 qpair failed and we were unable to recover it. 00:29:34.162 [2024-07-15 22:26:59.251008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.162 [2024-07-15 22:26:59.251014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.162 qpair failed and we were unable to recover it. 00:29:34.162 [2024-07-15 22:26:59.251445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.162 [2024-07-15 22:26:59.251451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.162 qpair failed and we were unable to recover it. 00:29:34.162 [2024-07-15 22:26:59.251774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.162 [2024-07-15 22:26:59.251781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.162 qpair failed and we were unable to recover it. 
00:29:34.162 [2024-07-15 22:26:59.252191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.162 [2024-07-15 22:26:59.252198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.162 qpair failed and we were unable to recover it. 00:29:34.162 [2024-07-15 22:26:59.252590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.162 [2024-07-15 22:26:59.252596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.162 qpair failed and we were unable to recover it. 00:29:34.162 [2024-07-15 22:26:59.253018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.162 [2024-07-15 22:26:59.253025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.162 qpair failed and we were unable to recover it. 00:29:34.162 [2024-07-15 22:26:59.253225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.162 [2024-07-15 22:26:59.253232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.162 qpair failed and we were unable to recover it. 00:29:34.162 [2024-07-15 22:26:59.253484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.162 [2024-07-15 22:26:59.253490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.162 qpair failed and we were unable to recover it. 00:29:34.162 [2024-07-15 22:26:59.253891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.162 [2024-07-15 22:26:59.253897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.162 qpair failed and we were unable to recover it. 00:29:34.162 [2024-07-15 22:26:59.254208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.162 [2024-07-15 22:26:59.254215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.162 qpair failed and we were unable to recover it. 00:29:34.162 [2024-07-15 22:26:59.254624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.162 [2024-07-15 22:26:59.254631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.162 qpair failed and we were unable to recover it. 00:29:34.162 [2024-07-15 22:26:59.255027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.163 [2024-07-15 22:26:59.255033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.163 qpair failed and we were unable to recover it. 00:29:34.163 [2024-07-15 22:26:59.255141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.163 [2024-07-15 22:26:59.255149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.163 qpair failed and we were unable to recover it. 
00:29:34.163 [2024-07-15 22:26:59.255490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.163 [2024-07-15 22:26:59.255497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.163 qpair failed and we were unable to recover it. 00:29:34.163 [2024-07-15 22:26:59.255899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.163 [2024-07-15 22:26:59.255907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.163 qpair failed and we were unable to recover it. 00:29:34.163 [2024-07-15 22:26:59.256183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.163 [2024-07-15 22:26:59.256191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.163 qpair failed and we were unable to recover it. 00:29:34.163 [2024-07-15 22:26:59.256608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.163 [2024-07-15 22:26:59.256616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.163 qpair failed and we were unable to recover it. 00:29:34.163 [2024-07-15 22:26:59.256887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.163 [2024-07-15 22:26:59.256893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.163 qpair failed and we were unable to recover it. 00:29:34.163 [2024-07-15 22:26:59.257279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.163 [2024-07-15 22:26:59.257286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.163 qpair failed and we were unable to recover it. 00:29:34.163 [2024-07-15 22:26:59.257513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.163 [2024-07-15 22:26:59.257520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.163 qpair failed and we were unable to recover it. 00:29:34.163 [2024-07-15 22:26:59.257964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.163 [2024-07-15 22:26:59.257970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.163 qpair failed and we were unable to recover it. 00:29:34.163 [2024-07-15 22:26:59.258193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.163 [2024-07-15 22:26:59.258199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.163 qpair failed and we were unable to recover it. 00:29:34.163 [2024-07-15 22:26:59.258598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.163 [2024-07-15 22:26:59.258605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.163 qpair failed and we were unable to recover it. 
00:29:34.163 [2024-07-15 22:26:59.259004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.163 [2024-07-15 22:26:59.259011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.163 qpair failed and we were unable to recover it. 00:29:34.163 [2024-07-15 22:26:59.259439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.163 [2024-07-15 22:26:59.259446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.163 qpair failed and we were unable to recover it. 00:29:34.163 [2024-07-15 22:26:59.259837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.163 [2024-07-15 22:26:59.259843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.163 qpair failed and we were unable to recover it. 00:29:34.163 [2024-07-15 22:26:59.260266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.163 [2024-07-15 22:26:59.260272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.163 qpair failed and we were unable to recover it. 00:29:34.163 [2024-07-15 22:26:59.260482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.163 [2024-07-15 22:26:59.260488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.163 qpair failed and we were unable to recover it. 00:29:34.163 [2024-07-15 22:26:59.260937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.163 [2024-07-15 22:26:59.260945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.163 qpair failed and we were unable to recover it. 00:29:34.163 [2024-07-15 22:26:59.261345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.163 [2024-07-15 22:26:59.261351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.163 qpair failed and we were unable to recover it. 00:29:34.163 [2024-07-15 22:26:59.261621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.163 [2024-07-15 22:26:59.261627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.163 qpair failed and we were unable to recover it. 00:29:34.163 [2024-07-15 22:26:59.261856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.163 [2024-07-15 22:26:59.261862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.163 qpair failed and we were unable to recover it. 00:29:34.163 [2024-07-15 22:26:59.262282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.163 [2024-07-15 22:26:59.262291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.163 qpair failed and we were unable to recover it. 
00:29:34.163 [2024-07-15 22:26:59.262727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.163 [2024-07-15 22:26:59.262733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.163 qpair failed and we were unable to recover it. 00:29:34.163 [2024-07-15 22:26:59.262958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.163 [2024-07-15 22:26:59.262964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.163 qpair failed and we were unable to recover it. 00:29:34.163 [2024-07-15 22:26:59.263378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.163 [2024-07-15 22:26:59.263385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.163 qpair failed and we were unable to recover it. 00:29:34.163 [2024-07-15 22:26:59.263715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.163 [2024-07-15 22:26:59.263721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.163 qpair failed and we were unable to recover it. 00:29:34.163 [2024-07-15 22:26:59.264151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.163 [2024-07-15 22:26:59.264158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.163 qpair failed and we were unable to recover it. 00:29:34.163 [2024-07-15 22:26:59.264572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.163 [2024-07-15 22:26:59.264579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.163 qpair failed and we were unable to recover it. 00:29:34.163 [2024-07-15 22:26:59.264966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.163 [2024-07-15 22:26:59.264972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.163 qpair failed and we were unable to recover it. 00:29:34.163 [2024-07-15 22:26:59.265359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.163 [2024-07-15 22:26:59.265366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.163 qpair failed and we were unable to recover it. 00:29:34.163 [2024-07-15 22:26:59.265679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.163 [2024-07-15 22:26:59.265685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.163 qpair failed and we were unable to recover it. 00:29:34.163 [2024-07-15 22:26:59.266139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.163 [2024-07-15 22:26:59.266146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.163 qpair failed and we were unable to recover it. 
00:29:34.163 [2024-07-15 22:26:59.266613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.163 [2024-07-15 22:26:59.266620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.163 qpair failed and we were unable to recover it. 00:29:34.163 [2024-07-15 22:26:59.267009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.163 [2024-07-15 22:26:59.267015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.163 qpair failed and we were unable to recover it. 00:29:34.163 [2024-07-15 22:26:59.267484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.163 [2024-07-15 22:26:59.267491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.163 qpair failed and we were unable to recover it. 00:29:34.163 [2024-07-15 22:26:59.267965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.163 [2024-07-15 22:26:59.267972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.163 qpair failed and we were unable to recover it. 00:29:34.163 [2024-07-15 22:26:59.268412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.163 [2024-07-15 22:26:59.268420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.163 qpair failed and we were unable to recover it. 00:29:34.163 [2024-07-15 22:26:59.268830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.163 [2024-07-15 22:26:59.268837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.163 qpair failed and we were unable to recover it. 00:29:34.163 [2024-07-15 22:26:59.269110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.163 [2024-07-15 22:26:59.269118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.163 qpair failed and we were unable to recover it. 00:29:34.163 [2024-07-15 22:26:59.269556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.163 [2024-07-15 22:26:59.269563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.163 qpair failed and we were unable to recover it. 00:29:34.163 [2024-07-15 22:26:59.269778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.164 [2024-07-15 22:26:59.269786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.164 qpair failed and we were unable to recover it. 00:29:34.164 [2024-07-15 22:26:59.270073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.164 [2024-07-15 22:26:59.270080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.164 qpair failed and we were unable to recover it. 
00:29:34.164 [2024-07-15 22:26:59.270512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.164 [2024-07-15 22:26:59.270520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.164 qpair failed and we were unable to recover it. 00:29:34.164 [2024-07-15 22:26:59.270793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.164 [2024-07-15 22:26:59.270801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.164 qpair failed and we were unable to recover it. 00:29:34.164 [2024-07-15 22:26:59.271211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.164 [2024-07-15 22:26:59.271217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.164 qpair failed and we were unable to recover it. 00:29:34.164 [2024-07-15 22:26:59.271617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.164 [2024-07-15 22:26:59.271624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.164 qpair failed and we were unable to recover it. 00:29:34.164 [2024-07-15 22:26:59.271837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.164 [2024-07-15 22:26:59.271843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.164 qpair failed and we were unable to recover it. 00:29:34.164 [2024-07-15 22:26:59.272229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.164 [2024-07-15 22:26:59.272236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.164 qpair failed and we were unable to recover it. 00:29:34.164 [2024-07-15 22:26:59.272659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.164 [2024-07-15 22:26:59.272665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.164 qpair failed and we were unable to recover it. 00:29:34.164 [2024-07-15 22:26:59.273062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.164 [2024-07-15 22:26:59.273069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.164 qpair failed and we were unable to recover it. 00:29:34.164 [2024-07-15 22:26:59.273476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.164 [2024-07-15 22:26:59.273483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.164 qpair failed and we were unable to recover it. 00:29:34.164 [2024-07-15 22:26:59.273907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.164 [2024-07-15 22:26:59.273914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.164 qpair failed and we were unable to recover it. 
00:29:34.164 [2024-07-15 22:26:59.274119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.164 [2024-07-15 22:26:59.274131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.164 qpair failed and we were unable to recover it. 00:29:34.164 [2024-07-15 22:26:59.274537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.164 [2024-07-15 22:26:59.274544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.164 qpair failed and we were unable to recover it. 00:29:34.164 [2024-07-15 22:26:59.274939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.164 [2024-07-15 22:26:59.274945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.164 qpair failed and we were unable to recover it. 00:29:34.164 [2024-07-15 22:26:59.275462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.164 [2024-07-15 22:26:59.275489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.164 qpair failed and we were unable to recover it. 00:29:34.164 [2024-07-15 22:26:59.275899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.164 [2024-07-15 22:26:59.275908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.164 qpair failed and we were unable to recover it. 00:29:34.164 [2024-07-15 22:26:59.276338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.164 [2024-07-15 22:26:59.276366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.164 qpair failed and we were unable to recover it. 00:29:34.164 [2024-07-15 22:26:59.276600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.164 [2024-07-15 22:26:59.276608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.164 qpair failed and we were unable to recover it. 00:29:34.164 [2024-07-15 22:26:59.276839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.164 [2024-07-15 22:26:59.276847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.164 qpair failed and we were unable to recover it. 00:29:34.164 [2024-07-15 22:26:59.277164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.164 [2024-07-15 22:26:59.277171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.164 qpair failed and we were unable to recover it. 00:29:34.164 [2024-07-15 22:26:59.277560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.164 [2024-07-15 22:26:59.277570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.164 qpair failed and we were unable to recover it. 
00:29:34.164 [2024-07-15 22:26:59.278018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.164 [2024-07-15 22:26:59.278025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.164 qpair failed and we were unable to recover it. 00:29:34.164 [2024-07-15 22:26:59.278439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.164 [2024-07-15 22:26:59.278445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.164 qpair failed and we were unable to recover it. 00:29:34.164 [2024-07-15 22:26:59.278834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.164 [2024-07-15 22:26:59.278841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.164 qpair failed and we were unable to recover it. 00:29:34.164 [2024-07-15 22:26:59.279229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.164 [2024-07-15 22:26:59.279237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.164 qpair failed and we were unable to recover it. 00:29:34.164 [2024-07-15 22:26:59.279649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.164 [2024-07-15 22:26:59.279657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.164 qpair failed and we were unable to recover it. 00:29:34.164 [2024-07-15 22:26:59.280067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.164 [2024-07-15 22:26:59.280073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.164 qpair failed and we were unable to recover it. 00:29:34.164 [2024-07-15 22:26:59.280251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.164 [2024-07-15 22:26:59.280258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.164 qpair failed and we were unable to recover it. 00:29:34.164 [2024-07-15 22:26:59.280700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.164 [2024-07-15 22:26:59.280707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.164 qpair failed and we were unable to recover it. 00:29:34.164 [2024-07-15 22:26:59.280929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.164 [2024-07-15 22:26:59.280935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.164 qpair failed and we were unable to recover it. 00:29:34.164 [2024-07-15 22:26:59.281319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.164 [2024-07-15 22:26:59.281326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.164 qpair failed and we were unable to recover it. 
00:29:34.164 [2024-07-15 22:26:59.281746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.164 [2024-07-15 22:26:59.281752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:34.164 qpair failed and we were unable to recover it.
[... the same three-line sequence — connect() failed, errno = 111 / sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it — repeats for every retry from 22:26:59.282138 through 22:26:59.360920 ...]
00:29:34.170 [2024-07-15 22:26:59.361194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.170 [2024-07-15 22:26:59.361201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:34.170 qpair failed and we were unable to recover it.
00:29:34.170 [2024-07-15 22:26:59.361575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.170 [2024-07-15 22:26:59.361582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.170 qpair failed and we were unable to recover it. 00:29:34.170 [2024-07-15 22:26:59.361997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.170 [2024-07-15 22:26:59.362003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.170 qpair failed and we were unable to recover it. 00:29:34.170 [2024-07-15 22:26:59.362397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.170 [2024-07-15 22:26:59.362405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.170 qpair failed and we were unable to recover it. 00:29:34.170 [2024-07-15 22:26:59.362842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.170 [2024-07-15 22:26:59.362849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.170 qpair failed and we were unable to recover it. 00:29:34.170 [2024-07-15 22:26:59.363394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.170 [2024-07-15 22:26:59.363422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.170 qpair failed and we were unable to recover it. 00:29:34.170 [2024-07-15 22:26:59.363653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.170 [2024-07-15 22:26:59.363661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.170 qpair failed and we were unable to recover it. 00:29:34.170 [2024-07-15 22:26:59.364079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.170 [2024-07-15 22:26:59.364089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.170 qpair failed and we were unable to recover it. 00:29:34.170 [2024-07-15 22:26:59.364498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.170 [2024-07-15 22:26:59.364506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.170 qpair failed and we were unable to recover it. 00:29:34.170 [2024-07-15 22:26:59.364897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.170 [2024-07-15 22:26:59.364905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.170 qpair failed and we were unable to recover it. 00:29:34.170 [2024-07-15 22:26:59.365411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.170 [2024-07-15 22:26:59.365419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.170 qpair failed and we were unable to recover it. 
00:29:34.170 [2024-07-15 22:26:59.365641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.170 [2024-07-15 22:26:59.365648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.170 qpair failed and we were unable to recover it. 00:29:34.170 [2024-07-15 22:26:59.365966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.170 [2024-07-15 22:26:59.365972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.170 qpair failed and we were unable to recover it. 00:29:34.170 [2024-07-15 22:26:59.366368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.170 [2024-07-15 22:26:59.366374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.170 qpair failed and we were unable to recover it. 00:29:34.170 [2024-07-15 22:26:59.366804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.170 [2024-07-15 22:26:59.366810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.170 qpair failed and we were unable to recover it. 00:29:34.170 [2024-07-15 22:26:59.367041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.170 [2024-07-15 22:26:59.367047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.170 qpair failed and we were unable to recover it. 00:29:34.170 [2024-07-15 22:26:59.367469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.170 [2024-07-15 22:26:59.367476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.170 qpair failed and we were unable to recover it. 00:29:34.170 [2024-07-15 22:26:59.367711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.170 [2024-07-15 22:26:59.367718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.170 qpair failed and we were unable to recover it. 00:29:34.170 [2024-07-15 22:26:59.367944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.170 [2024-07-15 22:26:59.367954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.170 qpair failed and we were unable to recover it. 00:29:34.170 [2024-07-15 22:26:59.368329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.170 [2024-07-15 22:26:59.368336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.170 qpair failed and we were unable to recover it. 00:29:34.170 [2024-07-15 22:26:59.368731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.171 [2024-07-15 22:26:59.368737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.171 qpair failed and we were unable to recover it. 
00:29:34.171 [2024-07-15 22:26:59.369132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.171 [2024-07-15 22:26:59.369139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.171 qpair failed and we were unable to recover it. 00:29:34.171 [2024-07-15 22:26:59.369345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.171 [2024-07-15 22:26:59.369351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.171 qpair failed and we were unable to recover it. 00:29:34.171 [2024-07-15 22:26:59.369785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.171 [2024-07-15 22:26:59.369791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.171 qpair failed and we were unable to recover it. 00:29:34.171 [2024-07-15 22:26:59.370013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.171 [2024-07-15 22:26:59.370020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.171 qpair failed and we were unable to recover it. 00:29:34.171 [2024-07-15 22:26:59.370143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.171 [2024-07-15 22:26:59.370150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.171 qpair failed and we were unable to recover it. 00:29:34.171 [2024-07-15 22:26:59.370531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.171 [2024-07-15 22:26:59.370537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.171 qpair failed and we were unable to recover it. 00:29:34.171 [2024-07-15 22:26:59.370929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.171 [2024-07-15 22:26:59.370935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.171 qpair failed and we were unable to recover it. 00:29:34.171 [2024-07-15 22:26:59.371356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.171 [2024-07-15 22:26:59.371363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.171 qpair failed and we were unable to recover it. 00:29:34.171 [2024-07-15 22:26:59.371585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.171 [2024-07-15 22:26:59.371591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.171 qpair failed and we were unable to recover it. 00:29:34.171 [2024-07-15 22:26:59.372001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.171 [2024-07-15 22:26:59.372007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.171 qpair failed and we were unable to recover it. 
00:29:34.171 [2024-07-15 22:26:59.372436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.171 [2024-07-15 22:26:59.372443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.171 qpair failed and we were unable to recover it. 00:29:34.171 [2024-07-15 22:26:59.372647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.171 [2024-07-15 22:26:59.372653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.171 qpair failed and we were unable to recover it. 00:29:34.171 [2024-07-15 22:26:59.373059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.171 [2024-07-15 22:26:59.373067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.171 qpair failed and we were unable to recover it. 00:29:34.171 [2024-07-15 22:26:59.373144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.171 [2024-07-15 22:26:59.373150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.171 qpair failed and we were unable to recover it. 00:29:34.171 [2024-07-15 22:26:59.373565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.171 [2024-07-15 22:26:59.373572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.171 qpair failed and we were unable to recover it. 00:29:34.171 [2024-07-15 22:26:59.373837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.171 [2024-07-15 22:26:59.373844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.171 qpair failed and we were unable to recover it. 00:29:34.171 [2024-07-15 22:26:59.374275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.171 [2024-07-15 22:26:59.374282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.171 qpair failed and we were unable to recover it. 00:29:34.171 [2024-07-15 22:26:59.374766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.171 [2024-07-15 22:26:59.374773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.171 qpair failed and we were unable to recover it. 00:29:34.171 [2024-07-15 22:26:59.375179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.171 [2024-07-15 22:26:59.375185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.171 qpair failed and we were unable to recover it. 00:29:34.171 [2024-07-15 22:26:59.375623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.171 [2024-07-15 22:26:59.375629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.171 qpair failed and we were unable to recover it. 
00:29:34.171 [2024-07-15 22:26:59.375896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.171 [2024-07-15 22:26:59.375902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.171 qpair failed and we were unable to recover it. 00:29:34.171 [2024-07-15 22:26:59.376335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.171 [2024-07-15 22:26:59.376342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.171 qpair failed and we were unable to recover it. 00:29:34.171 [2024-07-15 22:26:59.376729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.171 [2024-07-15 22:26:59.376735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.171 qpair failed and we were unable to recover it. 00:29:34.171 [2024-07-15 22:26:59.377170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.171 [2024-07-15 22:26:59.377176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.171 qpair failed and we were unable to recover it. 00:29:34.171 [2024-07-15 22:26:59.377566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.171 [2024-07-15 22:26:59.377574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.171 qpair failed and we were unable to recover it. 00:29:34.171 [2024-07-15 22:26:59.378011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.171 [2024-07-15 22:26:59.378018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.171 qpair failed and we were unable to recover it. 00:29:34.171 [2024-07-15 22:26:59.378332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.171 [2024-07-15 22:26:59.378341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.171 qpair failed and we were unable to recover it. 00:29:34.171 [2024-07-15 22:26:59.378738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.171 [2024-07-15 22:26:59.378745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.171 qpair failed and we were unable to recover it. 00:29:34.171 [2024-07-15 22:26:59.379139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.171 [2024-07-15 22:26:59.379145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.171 qpair failed and we were unable to recover it. 00:29:34.171 [2024-07-15 22:26:59.379597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.171 [2024-07-15 22:26:59.379604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.171 qpair failed and we were unable to recover it. 
00:29:34.171 [2024-07-15 22:26:59.379993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.171 [2024-07-15 22:26:59.379999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.171 qpair failed and we were unable to recover it. 00:29:34.171 [2024-07-15 22:26:59.380392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.171 [2024-07-15 22:26:59.380399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.171 qpair failed and we were unable to recover it. 00:29:34.171 [2024-07-15 22:26:59.380786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.171 [2024-07-15 22:26:59.380793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.171 qpair failed and we were unable to recover it. 00:29:34.171 [2024-07-15 22:26:59.381190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.171 [2024-07-15 22:26:59.381197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.171 qpair failed and we were unable to recover it. 00:29:34.171 [2024-07-15 22:26:59.381374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.171 [2024-07-15 22:26:59.381381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.171 qpair failed and we were unable to recover it. 00:29:34.171 [2024-07-15 22:26:59.381785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.171 [2024-07-15 22:26:59.381793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.171 qpair failed and we were unable to recover it. 00:29:34.171 [2024-07-15 22:26:59.382000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.171 [2024-07-15 22:26:59.382010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.171 qpair failed and we were unable to recover it. 00:29:34.171 [2024-07-15 22:26:59.382395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.171 [2024-07-15 22:26:59.382402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.171 qpair failed and we were unable to recover it. 00:29:34.172 [2024-07-15 22:26:59.382708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.172 [2024-07-15 22:26:59.382715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.172 qpair failed and we were unable to recover it. 00:29:34.172 [2024-07-15 22:26:59.382915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.172 [2024-07-15 22:26:59.382923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.172 qpair failed and we were unable to recover it. 
00:29:34.172 [2024-07-15 22:26:59.383067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.172 [2024-07-15 22:26:59.383073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.172 qpair failed and we were unable to recover it. 00:29:34.172 [2024-07-15 22:26:59.383459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.172 [2024-07-15 22:26:59.383466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.172 qpair failed and we were unable to recover it. 00:29:34.172 [2024-07-15 22:26:59.383852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.172 [2024-07-15 22:26:59.383859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.172 qpair failed and we were unable to recover it. 00:29:34.172 [2024-07-15 22:26:59.384248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.172 [2024-07-15 22:26:59.384255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.172 qpair failed and we were unable to recover it. 00:29:34.172 [2024-07-15 22:26:59.384660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.172 [2024-07-15 22:26:59.384666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.172 qpair failed and we were unable to recover it. 00:29:34.172 [2024-07-15 22:26:59.384927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.172 [2024-07-15 22:26:59.384934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.172 qpair failed and we were unable to recover it. 00:29:34.172 [2024-07-15 22:26:59.385168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.172 [2024-07-15 22:26:59.385176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.172 qpair failed and we were unable to recover it. 00:29:34.172 [2024-07-15 22:26:59.385438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.172 [2024-07-15 22:26:59.385445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.172 qpair failed and we were unable to recover it. 00:29:34.172 [2024-07-15 22:26:59.385642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.172 [2024-07-15 22:26:59.385649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.172 qpair failed and we were unable to recover it. 00:29:34.172 [2024-07-15 22:26:59.385973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.172 [2024-07-15 22:26:59.385979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.172 qpair failed and we were unable to recover it. 
00:29:34.172 [2024-07-15 22:26:59.386272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.172 [2024-07-15 22:26:59.386278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.172 qpair failed and we were unable to recover it. 00:29:34.172 [2024-07-15 22:26:59.386687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.172 [2024-07-15 22:26:59.386694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.172 qpair failed and we were unable to recover it. 00:29:34.172 [2024-07-15 22:26:59.387031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.172 [2024-07-15 22:26:59.387038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.172 qpair failed and we were unable to recover it. 00:29:34.172 [2024-07-15 22:26:59.387454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.172 [2024-07-15 22:26:59.387461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.172 qpair failed and we were unable to recover it. 00:29:34.172 [2024-07-15 22:26:59.387855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.172 [2024-07-15 22:26:59.387861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.172 qpair failed and we were unable to recover it. 00:29:34.172 [2024-07-15 22:26:59.388249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.172 [2024-07-15 22:26:59.388256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.172 qpair failed and we were unable to recover it. 00:29:34.172 [2024-07-15 22:26:59.388693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.172 [2024-07-15 22:26:59.388699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.172 qpair failed and we were unable to recover it. 00:29:34.172 [2024-07-15 22:26:59.389098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.172 [2024-07-15 22:26:59.389104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.172 qpair failed and we were unable to recover it. 00:29:34.172 [2024-07-15 22:26:59.389543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.172 [2024-07-15 22:26:59.389550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.172 qpair failed and we were unable to recover it. 00:29:34.172 [2024-07-15 22:26:59.389821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.172 [2024-07-15 22:26:59.389827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.172 qpair failed and we were unable to recover it. 
00:29:34.172 [2024-07-15 22:26:59.390092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.172 [2024-07-15 22:26:59.390099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.172 qpair failed and we were unable to recover it. 00:29:34.172 [2024-07-15 22:26:59.390302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.172 [2024-07-15 22:26:59.390309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.172 qpair failed and we were unable to recover it. 00:29:34.172 [2024-07-15 22:26:59.390733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.172 [2024-07-15 22:26:59.390740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.172 qpair failed and we were unable to recover it. 00:29:34.172 [2024-07-15 22:26:59.391134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.172 [2024-07-15 22:26:59.391142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.172 qpair failed and we were unable to recover it. 00:29:34.172 [2024-07-15 22:26:59.391332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.172 [2024-07-15 22:26:59.391340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.172 qpair failed and we were unable to recover it. 00:29:34.172 [2024-07-15 22:26:59.391512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.172 [2024-07-15 22:26:59.391520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.172 qpair failed and we were unable to recover it. 00:29:34.172 [2024-07-15 22:26:59.391898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.172 [2024-07-15 22:26:59.391907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.172 qpair failed and we were unable to recover it. 00:29:34.172 [2024-07-15 22:26:59.392343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.172 [2024-07-15 22:26:59.392350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.172 qpair failed and we were unable to recover it. 00:29:34.172 [2024-07-15 22:26:59.392747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.172 [2024-07-15 22:26:59.392753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.172 qpair failed and we were unable to recover it. 00:29:34.172 [2024-07-15 22:26:59.392824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.172 [2024-07-15 22:26:59.392830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.172 qpair failed and we were unable to recover it. 
00:29:34.172 [2024-07-15 22:26:59.393212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.172 [2024-07-15 22:26:59.393220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.172 qpair failed and we were unable to recover it. 00:29:34.172 [2024-07-15 22:26:59.393634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.172 [2024-07-15 22:26:59.393640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.172 qpair failed and we were unable to recover it. 00:29:34.172 [2024-07-15 22:26:59.394053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.172 [2024-07-15 22:26:59.394059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.172 qpair failed and we were unable to recover it. 00:29:34.172 [2024-07-15 22:26:59.394460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.172 [2024-07-15 22:26:59.394468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.172 qpair failed and we were unable to recover it. 00:29:34.172 [2024-07-15 22:26:59.394777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.172 [2024-07-15 22:26:59.394784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.172 qpair failed and we were unable to recover it. 00:29:34.172 [2024-07-15 22:26:59.395188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.172 [2024-07-15 22:26:59.395194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.172 qpair failed and we were unable to recover it. 00:29:34.172 [2024-07-15 22:26:59.395582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.173 [2024-07-15 22:26:59.395588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.173 qpair failed and we were unable to recover it. 00:29:34.173 [2024-07-15 22:26:59.395887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.173 [2024-07-15 22:26:59.395893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.173 qpair failed and we were unable to recover it. 00:29:34.173 [2024-07-15 22:26:59.396213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.173 [2024-07-15 22:26:59.396220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.173 qpair failed and we were unable to recover it. 00:29:34.173 [2024-07-15 22:26:59.396638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.173 [2024-07-15 22:26:59.396644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.173 qpair failed and we were unable to recover it. 
00:29:34.173 [2024-07-15 22:26:59.397036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.173 [2024-07-15 22:26:59.397043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.173 qpair failed and we were unable to recover it. 00:29:34.173 [2024-07-15 22:26:59.397448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.173 [2024-07-15 22:26:59.397455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.173 qpair failed and we were unable to recover it. 00:29:34.173 [2024-07-15 22:26:59.397666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.173 [2024-07-15 22:26:59.397673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.173 qpair failed and we were unable to recover it. 00:29:34.173 [2024-07-15 22:26:59.398087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.173 [2024-07-15 22:26:59.398094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.173 qpair failed and we were unable to recover it. 00:29:34.173 [2024-07-15 22:26:59.398497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.173 [2024-07-15 22:26:59.398504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.173 qpair failed and we were unable to recover it. 00:29:34.173 [2024-07-15 22:26:59.398708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.173 [2024-07-15 22:26:59.398714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.173 qpair failed and we were unable to recover it. 00:29:34.173 [2024-07-15 22:26:59.399134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.173 [2024-07-15 22:26:59.399143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.173 qpair failed and we were unable to recover it. 00:29:34.173 [2024-07-15 22:26:59.399433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.173 [2024-07-15 22:26:59.399441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.173 qpair failed and we were unable to recover it. 00:29:34.173 [2024-07-15 22:26:59.399859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.173 [2024-07-15 22:26:59.399865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.173 qpair failed and we were unable to recover it. 00:29:34.173 [2024-07-15 22:26:59.400285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.173 [2024-07-15 22:26:59.400293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.173 qpair failed and we were unable to recover it. 
00:29:34.173 [2024-07-15 22:26:59.400726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.173 [2024-07-15 22:26:59.400732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.173 qpair failed and we were unable to recover it. 00:29:34.173 [2024-07-15 22:26:59.401121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.173 [2024-07-15 22:26:59.401135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.173 qpair failed and we were unable to recover it. 00:29:34.173 [2024-07-15 22:26:59.401547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.173 [2024-07-15 22:26:59.401553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.173 qpair failed and we were unable to recover it. 00:29:34.173 [2024-07-15 22:26:59.401949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.173 [2024-07-15 22:26:59.401956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.173 qpair failed and we were unable to recover it. 00:29:34.173 [2024-07-15 22:26:59.402350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.173 [2024-07-15 22:26:59.402356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.173 qpair failed and we were unable to recover it. 00:29:34.173 [2024-07-15 22:26:59.402827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.173 [2024-07-15 22:26:59.402833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.173 qpair failed and we were unable to recover it. 00:29:34.173 [2024-07-15 22:26:59.403327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.173 [2024-07-15 22:26:59.403355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.173 qpair failed and we were unable to recover it. 00:29:34.173 [2024-07-15 22:26:59.403794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.173 [2024-07-15 22:26:59.403802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.173 qpair failed and we were unable to recover it. 00:29:34.173 [2024-07-15 22:26:59.404252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.173 [2024-07-15 22:26:59.404260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.173 qpair failed and we were unable to recover it. 00:29:34.173 [2024-07-15 22:26:59.404449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.173 [2024-07-15 22:26:59.404456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.173 qpair failed and we were unable to recover it. 
00:29:34.173 [2024-07-15 22:26:59.404861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.173 [2024-07-15 22:26:59.404868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.173 qpair failed and we were unable to recover it. 00:29:34.173 [2024-07-15 22:26:59.405301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.173 [2024-07-15 22:26:59.405309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.173 qpair failed and we were unable to recover it. 00:29:34.173 [2024-07-15 22:26:59.405580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.173 [2024-07-15 22:26:59.405587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.173 qpair failed and we were unable to recover it. 00:29:34.173 [2024-07-15 22:26:59.405996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.173 [2024-07-15 22:26:59.406002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.173 qpair failed and we were unable to recover it. 00:29:34.173 [2024-07-15 22:26:59.406394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.173 [2024-07-15 22:26:59.406401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.173 qpair failed and we were unable to recover it. 00:29:34.173 [2024-07-15 22:26:59.406470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.173 [2024-07-15 22:26:59.406476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.173 qpair failed and we were unable to recover it. 00:29:34.173 [2024-07-15 22:26:59.406846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.173 [2024-07-15 22:26:59.406857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.173 qpair failed and we were unable to recover it. 00:29:34.173 [2024-07-15 22:26:59.407293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.173 [2024-07-15 22:26:59.407300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.173 qpair failed and we were unable to recover it. 00:29:34.173 [2024-07-15 22:26:59.407768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.173 [2024-07-15 22:26:59.407774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.173 qpair failed and we were unable to recover it. 00:29:34.173 [2024-07-15 22:26:59.408162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.173 [2024-07-15 22:26:59.408168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.173 qpair failed and we were unable to recover it. 
00:29:34.173 [2024-07-15 22:26:59.408588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.173 [2024-07-15 22:26:59.408595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:34.173 qpair failed and we were unable to recover it.
00:29:34.173 - 00:29:34.452 [2024-07-15 22:26:59.408937 - 22:26:59.489401] the same three-line sequence repeats for every subsequent reconnect attempt in this window: posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.
00:29:34.452 [2024-07-15 22:26:59.489793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.452 [2024-07-15 22:26:59.489800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.452 qpair failed and we were unable to recover it. 00:29:34.452 [2024-07-15 22:26:59.490189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.452 [2024-07-15 22:26:59.490196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.452 qpair failed and we were unable to recover it. 00:29:34.452 [2024-07-15 22:26:59.490519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.452 [2024-07-15 22:26:59.490525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.452 qpair failed and we were unable to recover it. 00:29:34.452 [2024-07-15 22:26:59.490940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.452 [2024-07-15 22:26:59.490947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.452 qpair failed and we were unable to recover it. 00:29:34.452 [2024-07-15 22:26:59.491334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.452 [2024-07-15 22:26:59.491341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.452 qpair failed and we were unable to recover it. 00:29:34.452 [2024-07-15 22:26:59.491828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.452 [2024-07-15 22:26:59.491834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.452 qpair failed and we were unable to recover it. 00:29:34.452 [2024-07-15 22:26:59.492221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.452 [2024-07-15 22:26:59.492227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.452 qpair failed and we were unable to recover it. 00:29:34.452 [2024-07-15 22:26:59.492649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.452 [2024-07-15 22:26:59.492656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.452 qpair failed and we were unable to recover it. 00:29:34.452 [2024-07-15 22:26:59.493089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.452 [2024-07-15 22:26:59.493096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.452 qpair failed and we were unable to recover it. 00:29:34.452 [2024-07-15 22:26:59.493544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.452 [2024-07-15 22:26:59.493551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.452 qpair failed and we were unable to recover it. 
00:29:34.452 [2024-07-15 22:26:59.493759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.452 [2024-07-15 22:26:59.493765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.452 qpair failed and we were unable to recover it. 00:29:34.452 [2024-07-15 22:26:59.493959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.452 [2024-07-15 22:26:59.493967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.452 qpair failed and we were unable to recover it. 00:29:34.452 [2024-07-15 22:26:59.494377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.452 [2024-07-15 22:26:59.494384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.452 qpair failed and we were unable to recover it. 00:29:34.452 [2024-07-15 22:26:59.494597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.452 [2024-07-15 22:26:59.494603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.452 qpair failed and we were unable to recover it. 00:29:34.452 [2024-07-15 22:26:59.495027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.452 [2024-07-15 22:26:59.495033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.452 qpair failed and we were unable to recover it. 00:29:34.452 [2024-07-15 22:26:59.495437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.452 [2024-07-15 22:26:59.495444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.452 qpair failed and we were unable to recover it. 00:29:34.452 [2024-07-15 22:26:59.495842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.452 [2024-07-15 22:26:59.495849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.452 qpair failed and we were unable to recover it. 00:29:34.452 [2024-07-15 22:26:59.496297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.452 [2024-07-15 22:26:59.496304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.452 qpair failed and we were unable to recover it. 00:29:34.452 [2024-07-15 22:26:59.496728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.452 [2024-07-15 22:26:59.496735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.452 qpair failed and we were unable to recover it. 00:29:34.452 [2024-07-15 22:26:59.497053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.452 [2024-07-15 22:26:59.497059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.452 qpair failed and we were unable to recover it. 
00:29:34.452 [2024-07-15 22:26:59.497472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.452 [2024-07-15 22:26:59.497479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.452 qpair failed and we were unable to recover it. 00:29:34.452 [2024-07-15 22:26:59.497873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.452 [2024-07-15 22:26:59.497879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.452 qpair failed and we were unable to recover it. 00:29:34.452 [2024-07-15 22:26:59.498277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.452 [2024-07-15 22:26:59.498283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.452 qpair failed and we were unable to recover it. 00:29:34.452 [2024-07-15 22:26:59.498690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.452 [2024-07-15 22:26:59.498696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.452 qpair failed and we were unable to recover it. 00:29:34.453 [2024-07-15 22:26:59.499088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.453 [2024-07-15 22:26:59.499095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.453 qpair failed and we were unable to recover it. 00:29:34.453 [2024-07-15 22:26:59.499406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.453 [2024-07-15 22:26:59.499412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.453 qpair failed and we were unable to recover it. 00:29:34.453 [2024-07-15 22:26:59.499666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.453 [2024-07-15 22:26:59.499675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.453 qpair failed and we were unable to recover it. 00:29:34.453 [2024-07-15 22:26:59.500085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.453 [2024-07-15 22:26:59.500092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.453 qpair failed and we were unable to recover it. 00:29:34.453 [2024-07-15 22:26:59.500518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.453 [2024-07-15 22:26:59.500525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.453 qpair failed and we were unable to recover it. 00:29:34.453 [2024-07-15 22:26:59.500914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.453 [2024-07-15 22:26:59.500922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.453 qpair failed and we were unable to recover it. 
00:29:34.453 [2024-07-15 22:26:59.501138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.453 [2024-07-15 22:26:59.501148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.453 qpair failed and we were unable to recover it. 00:29:34.453 [2024-07-15 22:26:59.501381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.453 [2024-07-15 22:26:59.501387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.453 qpair failed and we were unable to recover it. 00:29:34.453 [2024-07-15 22:26:59.501613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.453 [2024-07-15 22:26:59.501619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.453 qpair failed and we were unable to recover it. 00:29:34.453 [2024-07-15 22:26:59.502043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.453 [2024-07-15 22:26:59.502049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.453 qpair failed and we were unable to recover it. 00:29:34.453 [2024-07-15 22:26:59.502448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.453 [2024-07-15 22:26:59.502455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.453 qpair failed and we were unable to recover it. 00:29:34.453 [2024-07-15 22:26:59.502843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.453 [2024-07-15 22:26:59.502850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.453 qpair failed and we were unable to recover it. 00:29:34.453 [2024-07-15 22:26:59.503237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.453 [2024-07-15 22:26:59.503244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.453 qpair failed and we were unable to recover it. 00:29:34.453 [2024-07-15 22:26:59.503632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.453 [2024-07-15 22:26:59.503639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.453 qpair failed and we were unable to recover it. 00:29:34.453 [2024-07-15 22:26:59.503899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.453 [2024-07-15 22:26:59.503906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.453 qpair failed and we were unable to recover it. 00:29:34.453 [2024-07-15 22:26:59.504438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.453 [2024-07-15 22:26:59.504445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.453 qpair failed and we were unable to recover it. 
00:29:34.453 [2024-07-15 22:26:59.504908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.453 [2024-07-15 22:26:59.504915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.453 qpair failed and we were unable to recover it. 00:29:34.453 [2024-07-15 22:26:59.505309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.453 [2024-07-15 22:26:59.505316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.453 qpair failed and we were unable to recover it. 00:29:34.453 [2024-07-15 22:26:59.505722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.453 [2024-07-15 22:26:59.505730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.453 qpair failed and we were unable to recover it. 00:29:34.453 [2024-07-15 22:26:59.505942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.453 [2024-07-15 22:26:59.505950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.453 qpair failed and we were unable to recover it. 00:29:34.453 [2024-07-15 22:26:59.506368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.453 [2024-07-15 22:26:59.506375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.453 qpair failed and we were unable to recover it. 00:29:34.453 [2024-07-15 22:26:59.506805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.453 [2024-07-15 22:26:59.506812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.453 qpair failed and we were unable to recover it. 00:29:34.453 [2024-07-15 22:26:59.507210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.453 [2024-07-15 22:26:59.507217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.453 qpair failed and we were unable to recover it. 00:29:34.453 [2024-07-15 22:26:59.507616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.453 [2024-07-15 22:26:59.507622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.453 qpair failed and we were unable to recover it. 00:29:34.453 [2024-07-15 22:26:59.508055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.453 [2024-07-15 22:26:59.508062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.453 qpair failed and we were unable to recover it. 00:29:34.453 [2024-07-15 22:26:59.508520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.453 [2024-07-15 22:26:59.508526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.453 qpair failed and we were unable to recover it. 
00:29:34.453 [2024-07-15 22:26:59.508726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.453 [2024-07-15 22:26:59.508733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.453 qpair failed and we were unable to recover it. 00:29:34.453 [2024-07-15 22:26:59.509162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.453 [2024-07-15 22:26:59.509169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.453 qpair failed and we were unable to recover it. 00:29:34.453 [2024-07-15 22:26:59.509607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.453 [2024-07-15 22:26:59.509613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.453 qpair failed and we were unable to recover it. 00:29:34.453 [2024-07-15 22:26:59.510008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.453 [2024-07-15 22:26:59.510014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.453 qpair failed and we were unable to recover it. 00:29:34.453 [2024-07-15 22:26:59.510282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.453 [2024-07-15 22:26:59.510288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.453 qpair failed and we were unable to recover it. 00:29:34.453 [2024-07-15 22:26:59.510670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.453 [2024-07-15 22:26:59.510676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.453 qpair failed and we were unable to recover it. 00:29:34.453 [2024-07-15 22:26:59.511070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.453 [2024-07-15 22:26:59.511077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.453 qpair failed and we were unable to recover it. 00:29:34.453 [2024-07-15 22:26:59.511249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.453 [2024-07-15 22:26:59.511256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.453 qpair failed and we were unable to recover it. 00:29:34.453 [2024-07-15 22:26:59.511664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.453 [2024-07-15 22:26:59.511670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.453 qpair failed and we were unable to recover it. 00:29:34.453 [2024-07-15 22:26:59.512059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.453 [2024-07-15 22:26:59.512066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.453 qpair failed and we were unable to recover it. 
00:29:34.453 [2024-07-15 22:26:59.512464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.453 [2024-07-15 22:26:59.512470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.453 qpair failed and we were unable to recover it. 00:29:34.453 [2024-07-15 22:26:59.512674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.453 [2024-07-15 22:26:59.512680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.453 qpair failed and we were unable to recover it. 00:29:34.453 [2024-07-15 22:26:59.512941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.453 [2024-07-15 22:26:59.512949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.453 qpair failed and we were unable to recover it. 00:29:34.453 [2024-07-15 22:26:59.513161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.453 [2024-07-15 22:26:59.513170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.453 qpair failed and we were unable to recover it. 00:29:34.454 [2024-07-15 22:26:59.513625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.454 [2024-07-15 22:26:59.513632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.454 qpair failed and we were unable to recover it. 00:29:34.454 [2024-07-15 22:26:59.513809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.454 [2024-07-15 22:26:59.513816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.454 qpair failed and we were unable to recover it. 00:29:34.454 [2024-07-15 22:26:59.514190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.454 [2024-07-15 22:26:59.514199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.454 qpair failed and we were unable to recover it. 00:29:34.454 [2024-07-15 22:26:59.514643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.454 [2024-07-15 22:26:59.514649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.454 qpair failed and we were unable to recover it. 00:29:34.454 [2024-07-15 22:26:59.515084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.454 [2024-07-15 22:26:59.515091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.454 qpair failed and we were unable to recover it. 00:29:34.454 [2024-07-15 22:26:59.515285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.454 [2024-07-15 22:26:59.515292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.454 qpair failed and we were unable to recover it. 
00:29:34.454 [2024-07-15 22:26:59.515586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.454 [2024-07-15 22:26:59.515593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.454 qpair failed and we were unable to recover it. 00:29:34.454 [2024-07-15 22:26:59.515985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.454 [2024-07-15 22:26:59.515992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.454 qpair failed and we were unable to recover it. 00:29:34.454 [2024-07-15 22:26:59.516397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.454 [2024-07-15 22:26:59.516403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.454 qpair failed and we were unable to recover it. 00:29:34.454 [2024-07-15 22:26:59.516794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.454 [2024-07-15 22:26:59.516801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.454 qpair failed and we were unable to recover it. 00:29:34.454 [2024-07-15 22:26:59.517188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.454 [2024-07-15 22:26:59.517195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.454 qpair failed and we were unable to recover it. 00:29:34.454 [2024-07-15 22:26:59.517617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.454 [2024-07-15 22:26:59.517625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.454 qpair failed and we were unable to recover it. 00:29:34.454 [2024-07-15 22:26:59.518054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.454 [2024-07-15 22:26:59.518061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.454 qpair failed and we were unable to recover it. 00:29:34.454 [2024-07-15 22:26:59.518136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.454 [2024-07-15 22:26:59.518142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.454 qpair failed and we were unable to recover it. 00:29:34.454 [2024-07-15 22:26:59.518543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.454 [2024-07-15 22:26:59.518549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.454 qpair failed and we were unable to recover it. 00:29:34.454 [2024-07-15 22:26:59.518750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.454 [2024-07-15 22:26:59.518756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.454 qpair failed and we were unable to recover it. 
00:29:34.454 [2024-07-15 22:26:59.519169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.454 [2024-07-15 22:26:59.519176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.454 qpair failed and we were unable to recover it. 00:29:34.454 [2024-07-15 22:26:59.519585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.454 [2024-07-15 22:26:59.519591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.454 qpair failed and we were unable to recover it. 00:29:34.454 [2024-07-15 22:26:59.519984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.454 [2024-07-15 22:26:59.519991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.454 qpair failed and we were unable to recover it. 00:29:34.454 [2024-07-15 22:26:59.520195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.454 [2024-07-15 22:26:59.520202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.454 qpair failed and we were unable to recover it. 00:29:34.454 [2024-07-15 22:26:59.520627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.454 [2024-07-15 22:26:59.520633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.454 qpair failed and we were unable to recover it. 00:29:34.454 [2024-07-15 22:26:59.521073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.454 [2024-07-15 22:26:59.521080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.454 qpair failed and we were unable to recover it. 00:29:34.454 [2024-07-15 22:26:59.521477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.454 [2024-07-15 22:26:59.521484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.454 qpair failed and we were unable to recover it. 00:29:34.454 [2024-07-15 22:26:59.521718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.454 [2024-07-15 22:26:59.521726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.454 qpair failed and we were unable to recover it. 00:29:34.454 [2024-07-15 22:26:59.522134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.454 [2024-07-15 22:26:59.522142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.454 qpair failed and we were unable to recover it. 00:29:34.454 [2024-07-15 22:26:59.522578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.454 [2024-07-15 22:26:59.522585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.454 qpair failed and we were unable to recover it. 
00:29:34.454 [2024-07-15 22:26:59.523056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.454 [2024-07-15 22:26:59.523064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.454 qpair failed and we were unable to recover it. 00:29:34.454 [2024-07-15 22:26:59.523480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.454 [2024-07-15 22:26:59.523488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.454 qpair failed and we were unable to recover it. 00:29:34.454 [2024-07-15 22:26:59.523714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.454 [2024-07-15 22:26:59.523721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.454 qpair failed and we were unable to recover it. 00:29:34.454 [2024-07-15 22:26:59.524093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.454 [2024-07-15 22:26:59.524101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.454 qpair failed and we were unable to recover it. 00:29:34.454 [2024-07-15 22:26:59.524533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.454 [2024-07-15 22:26:59.524541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.454 qpair failed and we were unable to recover it. 00:29:34.454 [2024-07-15 22:26:59.524854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.454 [2024-07-15 22:26:59.524861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.454 qpair failed and we were unable to recover it. 00:29:34.454 [2024-07-15 22:26:59.525262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.454 [2024-07-15 22:26:59.525272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.454 qpair failed and we were unable to recover it. 00:29:34.454 [2024-07-15 22:26:59.525685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.454 [2024-07-15 22:26:59.525693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.454 qpair failed and we were unable to recover it. 00:29:34.454 [2024-07-15 22:26:59.526130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.454 [2024-07-15 22:26:59.526138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.454 qpair failed and we were unable to recover it. 00:29:34.454 [2024-07-15 22:26:59.526513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.454 [2024-07-15 22:26:59.526521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.454 qpair failed and we were unable to recover it. 
00:29:34.454 [2024-07-15 22:26:59.526836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.454 [2024-07-15 22:26:59.526843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.454 qpair failed and we were unable to recover it. 00:29:34.454 [2024-07-15 22:26:59.527119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.454 [2024-07-15 22:26:59.527132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.454 qpair failed and we were unable to recover it. 00:29:34.454 [2024-07-15 22:26:59.527588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.454 [2024-07-15 22:26:59.527596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.454 qpair failed and we were unable to recover it. 00:29:34.454 [2024-07-15 22:26:59.527986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.454 [2024-07-15 22:26:59.527994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.454 qpair failed and we were unable to recover it. 00:29:34.454 [2024-07-15 22:26:59.528479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.455 [2024-07-15 22:26:59.528506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.455 qpair failed and we were unable to recover it. 00:29:34.455 [2024-07-15 22:26:59.528782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.455 [2024-07-15 22:26:59.528791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.455 qpair failed and we were unable to recover it. 00:29:34.455 [2024-07-15 22:26:59.528948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.455 [2024-07-15 22:26:59.528966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.455 qpair failed and we were unable to recover it. 00:29:34.455 [2024-07-15 22:26:59.529366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.455 [2024-07-15 22:26:59.529373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.455 qpair failed and we were unable to recover it. 00:29:34.455 [2024-07-15 22:26:59.529597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.455 [2024-07-15 22:26:59.529605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.455 qpair failed and we were unable to recover it. 00:29:34.455 [2024-07-15 22:26:59.530021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.455 [2024-07-15 22:26:59.530028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.455 qpair failed and we were unable to recover it. 
00:29:34.455 [2024-07-15 22:26:59.530456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.455 [2024-07-15 22:26:59.530463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.455 qpair failed and we were unable to recover it. 00:29:34.455 [2024-07-15 22:26:59.530859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.455 [2024-07-15 22:26:59.530865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.455 qpair failed and we were unable to recover it. 00:29:34.455 [2024-07-15 22:26:59.531137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.455 [2024-07-15 22:26:59.531144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.455 qpair failed and we were unable to recover it. 00:29:34.455 [2024-07-15 22:26:59.531279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.455 [2024-07-15 22:26:59.531295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.455 qpair failed and we were unable to recover it. 00:29:34.455 [2024-07-15 22:26:59.531702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.455 [2024-07-15 22:26:59.531709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.455 qpair failed and we were unable to recover it. 00:29:34.455 [2024-07-15 22:26:59.532116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.455 [2024-07-15 22:26:59.532128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.455 qpair failed and we were unable to recover it. 00:29:34.455 [2024-07-15 22:26:59.532435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.455 [2024-07-15 22:26:59.532443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.455 qpair failed and we were unable to recover it. 00:29:34.455 [2024-07-15 22:26:59.532854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.455 [2024-07-15 22:26:59.532862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.455 qpair failed and we were unable to recover it. 00:29:34.455 [2024-07-15 22:26:59.533265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.455 [2024-07-15 22:26:59.533272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.455 qpair failed and we were unable to recover it. 00:29:34.455 [2024-07-15 22:26:59.533543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.455 [2024-07-15 22:26:59.533550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.455 qpair failed and we were unable to recover it. 
00:29:34.455 [2024-07-15 22:26:59.534005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.455 [2024-07-15 22:26:59.534013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.455 qpair failed and we were unable to recover it. 00:29:34.455 [2024-07-15 22:26:59.534500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.455 [2024-07-15 22:26:59.534508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.455 qpair failed and we were unable to recover it. 00:29:34.455 [2024-07-15 22:26:59.534905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.455 [2024-07-15 22:26:59.534911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.455 qpair failed and we were unable to recover it. 00:29:34.455 [2024-07-15 22:26:59.535220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.455 [2024-07-15 22:26:59.535227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.455 qpair failed and we were unable to recover it. 00:29:34.455 [2024-07-15 22:26:59.535635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.455 [2024-07-15 22:26:59.535642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.455 qpair failed and we were unable to recover it. 00:29:34.455 [2024-07-15 22:26:59.536034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.455 [2024-07-15 22:26:59.536041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.455 qpair failed and we were unable to recover it. 00:29:34.455 [2024-07-15 22:26:59.536447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.455 [2024-07-15 22:26:59.536454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.455 qpair failed and we were unable to recover it. 00:29:34.455 [2024-07-15 22:26:59.536858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.455 [2024-07-15 22:26:59.536865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.455 qpair failed and we were unable to recover it. 00:29:34.455 [2024-07-15 22:26:59.537266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.455 [2024-07-15 22:26:59.537273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.455 qpair failed and we were unable to recover it. 00:29:34.455 [2024-07-15 22:26:59.537604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.455 [2024-07-15 22:26:59.537612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.455 qpair failed and we were unable to recover it. 
00:29:34.455 [2024-07-15 22:26:59.538047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.455 [2024-07-15 22:26:59.538055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:34.455 qpair failed and we were unable to recover it.
00:29:34.455 [... the same three-line failure repeats for every reconnect attempt from 2024-07-15 22:26:59.538 through 22:26:59.615 (console timestamps 00:29:34.455 - 00:29:34.461); only the microsecond timestamps differ. Each attempt logs posix_sock_create: connect() failed, errno = 111, then nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." ...]
00:29:34.461 [2024-07-15 22:26:59.615465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.461 [2024-07-15 22:26:59.615472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420
00:29:34.461 qpair failed and we were unable to recover it.
00:29:34.461 [2024-07-15 22:26:59.615687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.461 [2024-07-15 22:26:59.615694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.461 qpair failed and we were unable to recover it. 00:29:34.461 [2024-07-15 22:26:59.615882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.461 [2024-07-15 22:26:59.615891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.461 qpair failed and we were unable to recover it. 00:29:34.461 [2024-07-15 22:26:59.616361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.461 [2024-07-15 22:26:59.616368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.461 qpair failed and we were unable to recover it. 00:29:34.461 [2024-07-15 22:26:59.616661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.461 [2024-07-15 22:26:59.616668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.461 qpair failed and we were unable to recover it. 00:29:34.461 [2024-07-15 22:26:59.616960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.461 [2024-07-15 22:26:59.616966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.461 qpair failed and we were unable to recover it. 00:29:34.461 [2024-07-15 22:26:59.617250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.461 [2024-07-15 22:26:59.617257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.461 qpair failed and we were unable to recover it. 00:29:34.461 [2024-07-15 22:26:59.617690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.461 [2024-07-15 22:26:59.617696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.461 qpair failed and we were unable to recover it. 00:29:34.461 [2024-07-15 22:26:59.617903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.461 [2024-07-15 22:26:59.617910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.461 qpair failed and we were unable to recover it. 00:29:34.462 [2024-07-15 22:26:59.618317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.462 [2024-07-15 22:26:59.618325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.462 qpair failed and we were unable to recover it. 00:29:34.462 [2024-07-15 22:26:59.618523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.462 [2024-07-15 22:26:59.618529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.462 qpair failed and we were unable to recover it. 
00:29:34.462 [2024-07-15 22:26:59.618931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.462 [2024-07-15 22:26:59.618937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.462 qpair failed and we were unable to recover it. 00:29:34.462 [2024-07-15 22:26:59.619374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.462 [2024-07-15 22:26:59.619381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.462 qpair failed and we were unable to recover it. 00:29:34.462 [2024-07-15 22:26:59.619815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.462 [2024-07-15 22:26:59.619822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.462 qpair failed and we were unable to recover it. 00:29:34.462 [2024-07-15 22:26:59.620230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.462 [2024-07-15 22:26:59.620237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.462 qpair failed and we were unable to recover it. 00:29:34.462 [2024-07-15 22:26:59.620515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.462 [2024-07-15 22:26:59.620521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.462 qpair failed and we were unable to recover it. 00:29:34.462 [2024-07-15 22:26:59.620937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.462 [2024-07-15 22:26:59.620944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.462 qpair failed and we were unable to recover it. 00:29:34.462 [2024-07-15 22:26:59.621338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.462 [2024-07-15 22:26:59.621344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.462 qpair failed and we were unable to recover it. 00:29:34.462 [2024-07-15 22:26:59.621754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.462 [2024-07-15 22:26:59.621761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.462 qpair failed and we were unable to recover it. 00:29:34.462 [2024-07-15 22:26:59.622154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.462 [2024-07-15 22:26:59.622161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.462 qpair failed and we were unable to recover it. 00:29:34.462 [2024-07-15 22:26:59.622461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.462 [2024-07-15 22:26:59.622467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.462 qpair failed and we were unable to recover it. 
00:29:34.462 [2024-07-15 22:26:59.622872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.462 [2024-07-15 22:26:59.622879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.462 qpair failed and we were unable to recover it. 00:29:34.462 [2024-07-15 22:26:59.623267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.462 [2024-07-15 22:26:59.623275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.462 qpair failed and we were unable to recover it. 00:29:34.462 [2024-07-15 22:26:59.623588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.462 [2024-07-15 22:26:59.623595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.462 qpair failed and we were unable to recover it. 00:29:34.462 [2024-07-15 22:26:59.623990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.462 [2024-07-15 22:26:59.623997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.462 qpair failed and we were unable to recover it. 00:29:34.462 [2024-07-15 22:26:59.624393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.462 [2024-07-15 22:26:59.624400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.462 qpair failed and we were unable to recover it. 00:29:34.462 [2024-07-15 22:26:59.624796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.462 [2024-07-15 22:26:59.624802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.462 qpair failed and we were unable to recover it. 00:29:34.462 [2024-07-15 22:26:59.625279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.462 [2024-07-15 22:26:59.625286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.462 qpair failed and we were unable to recover it. 00:29:34.462 [2024-07-15 22:26:59.625689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.462 [2024-07-15 22:26:59.625695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.462 qpair failed and we were unable to recover it. 00:29:34.462 [2024-07-15 22:26:59.625894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.462 [2024-07-15 22:26:59.625902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.462 qpair failed and we were unable to recover it. 00:29:34.462 [2024-07-15 22:26:59.626212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.462 [2024-07-15 22:26:59.626219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.462 qpair failed and we were unable to recover it. 
00:29:34.462 [2024-07-15 22:26:59.626645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.462 [2024-07-15 22:26:59.626652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.462 qpair failed and we were unable to recover it. 00:29:34.462 [2024-07-15 22:26:59.627053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.462 [2024-07-15 22:26:59.627060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.462 qpair failed and we were unable to recover it. 00:29:34.462 [2024-07-15 22:26:59.627449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.462 [2024-07-15 22:26:59.627456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.462 qpair failed and we were unable to recover it. 00:29:34.462 [2024-07-15 22:26:59.627845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.462 [2024-07-15 22:26:59.627853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.462 qpair failed and we were unable to recover it. 00:29:34.462 [2024-07-15 22:26:59.628244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.462 [2024-07-15 22:26:59.628251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.462 qpair failed and we were unable to recover it. 00:29:34.462 [2024-07-15 22:26:59.628661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.462 [2024-07-15 22:26:59.628667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.462 qpair failed and we were unable to recover it. 00:29:34.462 [2024-07-15 22:26:59.629059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.462 [2024-07-15 22:26:59.629065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.462 qpair failed and we were unable to recover it. 00:29:34.462 [2024-07-15 22:26:59.629467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.462 [2024-07-15 22:26:59.629474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.462 qpair failed and we were unable to recover it. 00:29:34.462 [2024-07-15 22:26:59.629684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.462 [2024-07-15 22:26:59.629691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.462 qpair failed and we were unable to recover it. 00:29:34.462 [2024-07-15 22:26:59.630098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.462 [2024-07-15 22:26:59.630105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.462 qpair failed and we were unable to recover it. 
00:29:34.462 [2024-07-15 22:26:59.630275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.462 [2024-07-15 22:26:59.630282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.462 qpair failed and we were unable to recover it. 00:29:34.462 [2024-07-15 22:26:59.630711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.462 [2024-07-15 22:26:59.630718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.462 qpair failed and we were unable to recover it. 00:29:34.462 [2024-07-15 22:26:59.631129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.462 [2024-07-15 22:26:59.631136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.462 qpair failed and we were unable to recover it. 00:29:34.462 [2024-07-15 22:26:59.631556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.462 [2024-07-15 22:26:59.631563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.462 qpair failed and we were unable to recover it. 00:29:34.462 [2024-07-15 22:26:59.631963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.462 [2024-07-15 22:26:59.631969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.462 qpair failed and we were unable to recover it. 00:29:34.462 [2024-07-15 22:26:59.632331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.462 [2024-07-15 22:26:59.632359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.462 qpair failed and we were unable to recover it. 00:29:34.462 [2024-07-15 22:26:59.632832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.462 [2024-07-15 22:26:59.632840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.462 qpair failed and we were unable to recover it. 00:29:34.462 [2024-07-15 22:26:59.633365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.462 [2024-07-15 22:26:59.633392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.462 qpair failed and we were unable to recover it. 00:29:34.462 [2024-07-15 22:26:59.633707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.463 [2024-07-15 22:26:59.633715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.463 qpair failed and we were unable to recover it. 00:29:34.463 [2024-07-15 22:26:59.634189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.463 [2024-07-15 22:26:59.634197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.463 qpair failed and we were unable to recover it. 
00:29:34.463 [2024-07-15 22:26:59.634596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.463 [2024-07-15 22:26:59.634603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.463 qpair failed and we were unable to recover it. 00:29:34.463 [2024-07-15 22:26:59.634811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.463 [2024-07-15 22:26:59.634818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.463 qpair failed and we were unable to recover it. 00:29:34.463 [2024-07-15 22:26:59.635259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.463 [2024-07-15 22:26:59.635266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.463 qpair failed and we were unable to recover it. 00:29:34.463 [2024-07-15 22:26:59.635670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.463 [2024-07-15 22:26:59.635677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.463 qpair failed and we were unable to recover it. 00:29:34.463 [2024-07-15 22:26:59.635884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.463 [2024-07-15 22:26:59.635890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.463 qpair failed and we were unable to recover it. 00:29:34.463 [2024-07-15 22:26:59.636102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.463 [2024-07-15 22:26:59.636108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.463 qpair failed and we were unable to recover it. 00:29:34.463 [2024-07-15 22:26:59.636606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.463 [2024-07-15 22:26:59.636614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.463 qpair failed and we were unable to recover it. 00:29:34.463 [2024-07-15 22:26:59.637025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.463 [2024-07-15 22:26:59.637032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.463 qpair failed and we were unable to recover it. 00:29:34.463 [2024-07-15 22:26:59.637436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.463 [2024-07-15 22:26:59.637443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.463 qpair failed and we were unable to recover it. 00:29:34.463 [2024-07-15 22:26:59.637514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.463 [2024-07-15 22:26:59.637520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.463 qpair failed and we were unable to recover it. 
00:29:34.463 [2024-07-15 22:26:59.637901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.463 [2024-07-15 22:26:59.637908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.463 qpair failed and we were unable to recover it. 00:29:34.463 [2024-07-15 22:26:59.638117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.463 [2024-07-15 22:26:59.638128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.463 qpair failed and we were unable to recover it. 00:29:34.463 [2024-07-15 22:26:59.638545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.463 [2024-07-15 22:26:59.638553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.463 qpair failed and we were unable to recover it. 00:29:34.463 [2024-07-15 22:26:59.638756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.463 [2024-07-15 22:26:59.638763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.463 qpair failed and we were unable to recover it. 00:29:34.463 [2024-07-15 22:26:59.639072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.463 [2024-07-15 22:26:59.639078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.463 qpair failed and we were unable to recover it. 00:29:34.463 [2024-07-15 22:26:59.639487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.463 [2024-07-15 22:26:59.639494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.463 qpair failed and we were unable to recover it. 00:29:34.463 [2024-07-15 22:26:59.639888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.463 [2024-07-15 22:26:59.639895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.463 qpair failed and we were unable to recover it. 00:29:34.463 [2024-07-15 22:26:59.640284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.463 [2024-07-15 22:26:59.640292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.463 qpair failed and we were unable to recover it. 00:29:34.463 [2024-07-15 22:26:59.640685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.463 [2024-07-15 22:26:59.640691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.463 qpair failed and we were unable to recover it. 00:29:34.463 [2024-07-15 22:26:59.641082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.463 [2024-07-15 22:26:59.641089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.463 qpair failed and we were unable to recover it. 
00:29:34.463 [2024-07-15 22:26:59.641493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.463 [2024-07-15 22:26:59.641500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.463 qpair failed and we were unable to recover it. 00:29:34.463 [2024-07-15 22:26:59.641772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.463 [2024-07-15 22:26:59.641779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.463 qpair failed and we were unable to recover it. 00:29:34.463 [2024-07-15 22:26:59.642094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.463 [2024-07-15 22:26:59.642100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.463 qpair failed and we were unable to recover it. 00:29:34.463 [2024-07-15 22:26:59.642521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.463 [2024-07-15 22:26:59.642530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.463 qpair failed and we were unable to recover it. 00:29:34.463 [2024-07-15 22:26:59.642933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.463 [2024-07-15 22:26:59.642941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.463 qpair failed and we were unable to recover it. 00:29:34.463 [2024-07-15 22:26:59.643455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.463 [2024-07-15 22:26:59.643482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.463 qpair failed and we were unable to recover it. 00:29:34.463 [2024-07-15 22:26:59.643968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.463 [2024-07-15 22:26:59.643976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.463 qpair failed and we were unable to recover it. 00:29:34.463 [2024-07-15 22:26:59.644559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.463 [2024-07-15 22:26:59.644586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.463 qpair failed and we were unable to recover it. 00:29:34.463 [2024-07-15 22:26:59.644998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.463 [2024-07-15 22:26:59.645006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.463 qpair failed and we were unable to recover it. 00:29:34.463 [2024-07-15 22:26:59.645227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.463 [2024-07-15 22:26:59.645235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.463 qpair failed and we were unable to recover it. 
00:29:34.463 [2024-07-15 22:26:59.645606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.463 [2024-07-15 22:26:59.645613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.463 qpair failed and we were unable to recover it. 00:29:34.463 [2024-07-15 22:26:59.646000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.463 [2024-07-15 22:26:59.646008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.463 qpair failed and we were unable to recover it. 00:29:34.463 [2024-07-15 22:26:59.646433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.463 [2024-07-15 22:26:59.646440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.463 qpair failed and we were unable to recover it. 00:29:34.463 [2024-07-15 22:26:59.646765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.463 [2024-07-15 22:26:59.646772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.463 qpair failed and we were unable to recover it. 00:29:34.463 [2024-07-15 22:26:59.647038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.463 [2024-07-15 22:26:59.647045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.463 qpair failed and we were unable to recover it. 00:29:34.463 [2024-07-15 22:26:59.647464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.463 [2024-07-15 22:26:59.647471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.463 qpair failed and we were unable to recover it. 00:29:34.463 [2024-07-15 22:26:59.647921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.463 [2024-07-15 22:26:59.647928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.463 qpair failed and we were unable to recover it. 00:29:34.463 [2024-07-15 22:26:59.648255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.463 [2024-07-15 22:26:59.648262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.463 qpair failed and we were unable to recover it. 00:29:34.463 [2024-07-15 22:26:59.648643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.463 [2024-07-15 22:26:59.648649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.463 qpair failed and we were unable to recover it. 00:29:34.464 [2024-07-15 22:26:59.649041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.464 [2024-07-15 22:26:59.649048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.464 qpair failed and we were unable to recover it. 
00:29:34.464 [2024-07-15 22:26:59.649447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.464 [2024-07-15 22:26:59.649454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.464 qpair failed and we were unable to recover it. 00:29:34.464 [2024-07-15 22:26:59.649675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.464 [2024-07-15 22:26:59.649681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.464 qpair failed and we were unable to recover it. 00:29:34.464 [2024-07-15 22:26:59.650101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.464 [2024-07-15 22:26:59.650107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.464 qpair failed and we were unable to recover it. 00:29:34.464 [2024-07-15 22:26:59.650498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.464 [2024-07-15 22:26:59.650505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.464 qpair failed and we were unable to recover it. 00:29:34.464 [2024-07-15 22:26:59.650777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.464 [2024-07-15 22:26:59.650785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.464 qpair failed and we were unable to recover it. 00:29:34.464 [2024-07-15 22:26:59.651172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.464 [2024-07-15 22:26:59.651180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.464 qpair failed and we were unable to recover it. 00:29:34.464 [2024-07-15 22:26:59.651593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.464 [2024-07-15 22:26:59.651600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.464 qpair failed and we were unable to recover it. 00:29:34.464 [2024-07-15 22:26:59.651991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.464 [2024-07-15 22:26:59.651998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.464 qpair failed and we were unable to recover it. 00:29:34.464 [2024-07-15 22:26:59.652320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.464 [2024-07-15 22:26:59.652326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.464 qpair failed and we were unable to recover it. 00:29:34.464 [2024-07-15 22:26:59.652717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.464 [2024-07-15 22:26:59.652723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.464 qpair failed and we were unable to recover it. 
00:29:34.464 [2024-07-15 22:26:59.652928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.464 [2024-07-15 22:26:59.652935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.464 qpair failed and we were unable to recover it. 00:29:34.464 [2024-07-15 22:26:59.653128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.464 [2024-07-15 22:26:59.653138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.464 qpair failed and we were unable to recover it. 00:29:34.464 [2024-07-15 22:26:59.653452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.464 [2024-07-15 22:26:59.653459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.464 qpair failed and we were unable to recover it. 00:29:34.464 [2024-07-15 22:26:59.653858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.464 [2024-07-15 22:26:59.653866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.464 qpair failed and we were unable to recover it. 00:29:34.464 [2024-07-15 22:26:59.654286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.464 [2024-07-15 22:26:59.654293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.464 qpair failed and we were unable to recover it. 00:29:34.464 [2024-07-15 22:26:59.654559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.464 [2024-07-15 22:26:59.654565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.464 qpair failed and we were unable to recover it. 00:29:34.464 [2024-07-15 22:26:59.654955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.464 [2024-07-15 22:26:59.654962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.464 qpair failed and we were unable to recover it. 00:29:34.464 [2024-07-15 22:26:59.655269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.464 [2024-07-15 22:26:59.655276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.464 qpair failed and we were unable to recover it. 00:29:34.464 [2024-07-15 22:26:59.655690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.464 [2024-07-15 22:26:59.655697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.464 qpair failed and we were unable to recover it. 00:29:34.464 [2024-07-15 22:26:59.656134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.464 [2024-07-15 22:26:59.656141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.464 qpair failed and we were unable to recover it. 
00:29:34.464 [2024-07-15 22:26:59.656434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.464 [2024-07-15 22:26:59.656441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.464 qpair failed and we were unable to recover it. 00:29:34.464 [2024-07-15 22:26:59.656860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.464 [2024-07-15 22:26:59.656866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.464 qpair failed and we were unable to recover it. 00:29:34.464 [2024-07-15 22:26:59.657258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.464 [2024-07-15 22:26:59.657265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.464 qpair failed and we were unable to recover it. 00:29:34.464 [2024-07-15 22:26:59.657475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.464 [2024-07-15 22:26:59.657484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.464 qpair failed and we were unable to recover it. 00:29:34.464 [2024-07-15 22:26:59.657914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.464 [2024-07-15 22:26:59.657920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.464 qpair failed and we were unable to recover it. 00:29:34.464 [2024-07-15 22:26:59.658311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.464 [2024-07-15 22:26:59.658317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.464 qpair failed and we were unable to recover it. 00:29:34.464 [2024-07-15 22:26:59.658721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.464 [2024-07-15 22:26:59.658728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.464 qpair failed and we were unable to recover it. 00:29:34.464 [2024-07-15 22:26:59.659159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.464 [2024-07-15 22:26:59.659166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.464 qpair failed and we were unable to recover it. 00:29:34.464 [2024-07-15 22:26:59.659585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.464 [2024-07-15 22:26:59.659592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.464 qpair failed and we were unable to recover it. 00:29:34.464 [2024-07-15 22:26:59.659982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.464 [2024-07-15 22:26:59.659989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.464 qpair failed and we were unable to recover it. 
00:29:34.464 [2024-07-15 22:26:59.660380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.464 [2024-07-15 22:26:59.660387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.464 qpair failed and we were unable to recover it. 00:29:34.464 [2024-07-15 22:26:59.660592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.464 [2024-07-15 22:26:59.660598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.464 qpair failed and we were unable to recover it. 00:29:34.464 [2024-07-15 22:26:59.661056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.464 [2024-07-15 22:26:59.661063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.464 qpair failed and we were unable to recover it. 00:29:34.464 [2024-07-15 22:26:59.661336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.465 [2024-07-15 22:26:59.661343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.465 qpair failed and we were unable to recover it. 00:29:34.465 [2024-07-15 22:26:59.661755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.465 [2024-07-15 22:26:59.661762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.465 qpair failed and we were unable to recover it. 00:29:34.465 [2024-07-15 22:26:59.662038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.465 [2024-07-15 22:26:59.662045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.465 qpair failed and we were unable to recover it. 00:29:34.465 [2024-07-15 22:26:59.662267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.465 [2024-07-15 22:26:59.662274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.465 qpair failed and we were unable to recover it. 00:29:34.465 [2024-07-15 22:26:59.662742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.465 [2024-07-15 22:26:59.662748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.465 qpair failed and we were unable to recover it. 00:29:34.465 [2024-07-15 22:26:59.663144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.465 [2024-07-15 22:26:59.663150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.465 qpair failed and we were unable to recover it. 00:29:34.465 [2024-07-15 22:26:59.663613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.465 [2024-07-15 22:26:59.663620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.465 qpair failed and we were unable to recover it. 
00:29:34.465 [2024-07-15 22:26:59.663837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.465 [2024-07-15 22:26:59.663844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.465 qpair failed and we were unable to recover it. 00:29:34.465 [2024-07-15 22:26:59.664253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.465 [2024-07-15 22:26:59.664261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.465 qpair failed and we were unable to recover it. 00:29:34.465 [2024-07-15 22:26:59.664382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.465 [2024-07-15 22:26:59.664389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.465 qpair failed and we were unable to recover it. 00:29:34.465 [2024-07-15 22:26:59.664870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.465 [2024-07-15 22:26:59.664876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.465 qpair failed and we were unable to recover it. 00:29:34.465 [2024-07-15 22:26:59.665308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.465 [2024-07-15 22:26:59.665315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.465 qpair failed and we were unable to recover it. 00:29:34.465 [2024-07-15 22:26:59.665627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.465 [2024-07-15 22:26:59.665634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.465 qpair failed and we were unable to recover it. 00:29:34.465 [2024-07-15 22:26:59.666044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.465 [2024-07-15 22:26:59.666051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.465 qpair failed and we were unable to recover it. 00:29:34.465 [2024-07-15 22:26:59.666460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.465 [2024-07-15 22:26:59.666467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.465 qpair failed and we were unable to recover it. 00:29:34.465 [2024-07-15 22:26:59.666849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.465 [2024-07-15 22:26:59.666856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.465 qpair failed and we were unable to recover it. 00:29:34.465 [2024-07-15 22:26:59.667238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.465 [2024-07-15 22:26:59.667245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.465 qpair failed and we were unable to recover it. 
00:29:34.465 [2024-07-15 22:26:59.667657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.465 [2024-07-15 22:26:59.667663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.465 qpair failed and we were unable to recover it. 00:29:34.465 [2024-07-15 22:26:59.667929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.465 [2024-07-15 22:26:59.667937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.465 qpair failed and we were unable to recover it. 00:29:34.465 [2024-07-15 22:26:59.668336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.465 [2024-07-15 22:26:59.668343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.465 qpair failed and we were unable to recover it. 00:29:34.465 [2024-07-15 22:26:59.668639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.465 [2024-07-15 22:26:59.668646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.465 qpair failed and we were unable to recover it. 00:29:34.465 [2024-07-15 22:26:59.669036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.465 [2024-07-15 22:26:59.669042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.465 qpair failed and we were unable to recover it. 00:29:34.465 [2024-07-15 22:26:59.669443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.465 [2024-07-15 22:26:59.669450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.465 qpair failed and we were unable to recover it. 00:29:34.465 [2024-07-15 22:26:59.669853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.465 [2024-07-15 22:26:59.669859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.465 qpair failed and we were unable to recover it. 00:29:34.465 [2024-07-15 22:26:59.670247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.465 [2024-07-15 22:26:59.670254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.465 qpair failed and we were unable to recover it. 00:29:34.465 [2024-07-15 22:26:59.670534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.465 [2024-07-15 22:26:59.670540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.465 qpair failed and we were unable to recover it. 00:29:34.465 [2024-07-15 22:26:59.670965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.465 [2024-07-15 22:26:59.670972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.465 qpair failed and we were unable to recover it. 
00:29:34.465 [2024-07-15 22:26:59.671408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.465 [2024-07-15 22:26:59.671416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.465 qpair failed and we were unable to recover it. 00:29:34.465 [2024-07-15 22:26:59.671813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.465 [2024-07-15 22:26:59.671819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.465 qpair failed and we were unable to recover it. 00:29:34.465 [2024-07-15 22:26:59.672208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.465 [2024-07-15 22:26:59.672215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.465 qpair failed and we were unable to recover it. 00:29:34.465 [2024-07-15 22:26:59.672643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.465 [2024-07-15 22:26:59.672652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.465 qpair failed and we were unable to recover it. 00:29:34.465 [2024-07-15 22:26:59.673088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.465 [2024-07-15 22:26:59.673095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.465 qpair failed and we were unable to recover it. 00:29:34.465 [2024-07-15 22:26:59.673488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.465 [2024-07-15 22:26:59.673495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.465 qpair failed and we were unable to recover it. 00:29:34.465 [2024-07-15 22:26:59.673926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.465 [2024-07-15 22:26:59.673933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.465 qpair failed and we were unable to recover it. 00:29:34.465 [2024-07-15 22:26:59.674335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.465 [2024-07-15 22:26:59.674363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.465 qpair failed and we were unable to recover it. 00:29:34.465 [2024-07-15 22:26:59.674682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.465 [2024-07-15 22:26:59.674691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.465 qpair failed and we were unable to recover it. 00:29:34.465 [2024-07-15 22:26:59.675158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.465 [2024-07-15 22:26:59.675166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.465 qpair failed and we were unable to recover it. 
00:29:34.465 [2024-07-15 22:26:59.675434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.465 [2024-07-15 22:26:59.675441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.465 qpair failed and we were unable to recover it. 00:29:34.465 [2024-07-15 22:26:59.675887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.465 [2024-07-15 22:26:59.675894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.465 qpair failed and we were unable to recover it. 00:29:34.465 [2024-07-15 22:26:59.676292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.465 [2024-07-15 22:26:59.676299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.465 qpair failed and we were unable to recover it. 00:29:34.465 [2024-07-15 22:26:59.676580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.466 [2024-07-15 22:26:59.676588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.466 qpair failed and we were unable to recover it. 00:29:34.466 [2024-07-15 22:26:59.676981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.466 [2024-07-15 22:26:59.676987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.466 qpair failed and we were unable to recover it. 00:29:34.466 [2024-07-15 22:26:59.677254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.466 [2024-07-15 22:26:59.677262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.466 qpair failed and we were unable to recover it. 00:29:34.466 [2024-07-15 22:26:59.677535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.466 [2024-07-15 22:26:59.677542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.466 qpair failed and we were unable to recover it. 00:29:34.466 [2024-07-15 22:26:59.677957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.466 [2024-07-15 22:26:59.677964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.466 qpair failed and we were unable to recover it. 00:29:34.466 [2024-07-15 22:26:59.678354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.466 [2024-07-15 22:26:59.678361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.466 qpair failed and we were unable to recover it. 00:29:34.466 [2024-07-15 22:26:59.678561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.466 [2024-07-15 22:26:59.678571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.466 qpair failed and we were unable to recover it. 
00:29:34.466 [2024-07-15 22:26:59.679003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.466 [2024-07-15 22:26:59.679009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.466 qpair failed and we were unable to recover it. 00:29:34.466 [2024-07-15 22:26:59.679422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.466 [2024-07-15 22:26:59.679430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.466 qpair failed and we were unable to recover it. 00:29:34.466 [2024-07-15 22:26:59.679640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.466 [2024-07-15 22:26:59.679647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.466 qpair failed and we were unable to recover it. 00:29:34.466 [2024-07-15 22:26:59.679911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.466 [2024-07-15 22:26:59.679918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.466 qpair failed and we were unable to recover it. 00:29:34.466 [2024-07-15 22:26:59.680321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.466 [2024-07-15 22:26:59.680328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.466 qpair failed and we were unable to recover it. 00:29:34.466 [2024-07-15 22:26:59.680805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.466 [2024-07-15 22:26:59.680812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.466 qpair failed and we were unable to recover it. 00:29:34.466 [2024-07-15 22:26:59.680874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.466 [2024-07-15 22:26:59.680881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.466 qpair failed and we were unable to recover it. 00:29:34.466 [2024-07-15 22:26:59.681160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.466 [2024-07-15 22:26:59.681168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.466 qpair failed and we were unable to recover it. 00:29:34.466 [2024-07-15 22:26:59.681596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.466 [2024-07-15 22:26:59.681603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.466 qpair failed and we were unable to recover it. 00:29:34.466 [2024-07-15 22:26:59.681997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.466 [2024-07-15 22:26:59.682004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.466 qpair failed and we were unable to recover it. 
00:29:34.466 [2024-07-15 22:26:59.682395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.466 [2024-07-15 22:26:59.682403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.466 qpair failed and we were unable to recover it. 00:29:34.466 [2024-07-15 22:26:59.682699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.466 [2024-07-15 22:26:59.682707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.466 qpair failed and we were unable to recover it. 00:29:34.466 [2024-07-15 22:26:59.682918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.466 [2024-07-15 22:26:59.682925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.466 qpair failed and we were unable to recover it. 00:29:34.466 [2024-07-15 22:26:59.683346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.466 [2024-07-15 22:26:59.683354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.466 qpair failed and we were unable to recover it. 00:29:34.466 [2024-07-15 22:26:59.683767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.466 [2024-07-15 22:26:59.683774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.466 qpair failed and we were unable to recover it. 00:29:34.466 [2024-07-15 22:26:59.684095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.466 [2024-07-15 22:26:59.684101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.466 qpair failed and we were unable to recover it. 00:29:34.466 [2024-07-15 22:26:59.684497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.466 [2024-07-15 22:26:59.684503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.466 qpair failed and we were unable to recover it. 00:29:34.466 [2024-07-15 22:26:59.684898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.466 [2024-07-15 22:26:59.684905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.466 qpair failed and we were unable to recover it. 00:29:34.466 [2024-07-15 22:26:59.685297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.466 [2024-07-15 22:26:59.685304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.466 qpair failed and we were unable to recover it. 00:29:34.466 [2024-07-15 22:26:59.685618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.466 [2024-07-15 22:26:59.685626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.466 qpair failed and we were unable to recover it. 
00:29:34.466 [2024-07-15 22:26:59.686061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.466 [2024-07-15 22:26:59.686068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.466 qpair failed and we were unable to recover it. 00:29:34.466 [2024-07-15 22:26:59.686474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.466 [2024-07-15 22:26:59.686481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.466 qpair failed and we were unable to recover it. 00:29:34.466 [2024-07-15 22:26:59.686879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.466 [2024-07-15 22:26:59.686885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.466 qpair failed and we were unable to recover it. 00:29:34.466 [2024-07-15 22:26:59.687107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.466 [2024-07-15 22:26:59.687116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.466 qpair failed and we were unable to recover it. 00:29:34.466 [2024-07-15 22:26:59.687419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.466 [2024-07-15 22:26:59.687427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.466 qpair failed and we were unable to recover it. 00:29:34.466 [2024-07-15 22:26:59.687635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.466 [2024-07-15 22:26:59.687644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.466 qpair failed and we were unable to recover it. 00:29:34.466 [2024-07-15 22:26:59.687845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.466 [2024-07-15 22:26:59.687852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.466 qpair failed and we were unable to recover it. 00:29:34.466 [2024-07-15 22:26:59.688148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.466 [2024-07-15 22:26:59.688155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.466 qpair failed and we were unable to recover it. 00:29:34.466 [2024-07-15 22:26:59.688582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.466 [2024-07-15 22:26:59.688589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.466 qpair failed and we were unable to recover it. 00:29:34.466 [2024-07-15 22:26:59.688981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.466 [2024-07-15 22:26:59.688988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.466 qpair failed and we were unable to recover it. 
00:29:34.466 [2024-07-15 22:26:59.689413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.466 [2024-07-15 22:26:59.689420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.466 qpair failed and we were unable to recover it. 00:29:34.466 [2024-07-15 22:26:59.689848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.466 [2024-07-15 22:26:59.689855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.466 qpair failed and we were unable to recover it. 00:29:34.466 [2024-07-15 22:26:59.690079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.466 [2024-07-15 22:26:59.690085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.466 qpair failed and we were unable to recover it. 00:29:34.466 [2024-07-15 22:26:59.690478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.466 [2024-07-15 22:26:59.690485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.466 qpair failed and we were unable to recover it. 00:29:34.467 [2024-07-15 22:26:59.690751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.467 [2024-07-15 22:26:59.690758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.467 qpair failed and we were unable to recover it. 00:29:34.467 [2024-07-15 22:26:59.691149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.467 [2024-07-15 22:26:59.691156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.467 qpair failed and we were unable to recover it. 00:29:34.467 [2024-07-15 22:26:59.691574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.467 [2024-07-15 22:26:59.691582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.467 qpair failed and we were unable to recover it. 00:29:34.467 [2024-07-15 22:26:59.691975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.467 [2024-07-15 22:26:59.691982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.467 qpair failed and we were unable to recover it. 00:29:34.467 [2024-07-15 22:26:59.692253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.467 [2024-07-15 22:26:59.692260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.467 qpair failed and we were unable to recover it. 00:29:34.467 [2024-07-15 22:26:59.692620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.467 [2024-07-15 22:26:59.692627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.467 qpair failed and we were unable to recover it. 
00:29:34.467 [2024-07-15 22:26:59.693012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.467 [2024-07-15 22:26:59.693020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.467 qpair failed and we were unable to recover it. 00:29:34.467 [2024-07-15 22:26:59.693258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.467 [2024-07-15 22:26:59.693265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.467 qpair failed and we were unable to recover it. 00:29:34.467 [2024-07-15 22:26:59.693674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.467 [2024-07-15 22:26:59.693682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.467 qpair failed and we were unable to recover it. 00:29:34.467 [2024-07-15 22:26:59.694091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.467 [2024-07-15 22:26:59.694097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.467 qpair failed and we were unable to recover it. 00:29:34.467 [2024-07-15 22:26:59.694511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.467 [2024-07-15 22:26:59.694519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.467 qpair failed and we were unable to recover it. 00:29:34.467 [2024-07-15 22:26:59.694734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.467 [2024-07-15 22:26:59.694740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.467 qpair failed and we were unable to recover it. 00:29:34.467 [2024-07-15 22:26:59.695137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.467 [2024-07-15 22:26:59.695144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.467 qpair failed and we were unable to recover it. 00:29:34.467 [2024-07-15 22:26:59.695392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.467 [2024-07-15 22:26:59.695399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.467 qpair failed and we were unable to recover it. 00:29:34.467 [2024-07-15 22:26:59.695820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.467 [2024-07-15 22:26:59.695827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.467 qpair failed and we were unable to recover it. 00:29:34.467 [2024-07-15 22:26:59.696223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.467 [2024-07-15 22:26:59.696230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.467 qpair failed and we were unable to recover it. 
00:29:34.467 [2024-07-15 22:26:59.696294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.467 [2024-07-15 22:26:59.696302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.467 qpair failed and we were unable to recover it. 00:29:34.467 [2024-07-15 22:26:59.696751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.467 [2024-07-15 22:26:59.696758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.467 qpair failed and we were unable to recover it. 00:29:34.467 [2024-07-15 22:26:59.697172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.467 [2024-07-15 22:26:59.697179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.467 qpair failed and we were unable to recover it. 00:29:34.467 [2024-07-15 22:26:59.697384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.467 [2024-07-15 22:26:59.697391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.467 qpair failed and we were unable to recover it. 00:29:34.467 [2024-07-15 22:26:59.697805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.467 [2024-07-15 22:26:59.697812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.467 qpair failed and we were unable to recover it. 00:29:34.467 [2024-07-15 22:26:59.698232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.467 [2024-07-15 22:26:59.698240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.467 qpair failed and we were unable to recover it. 00:29:34.467 [2024-07-15 22:26:59.698643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.467 [2024-07-15 22:26:59.698650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.467 qpair failed and we were unable to recover it. 00:29:34.467 [2024-07-15 22:26:59.698948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.467 [2024-07-15 22:26:59.698955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.467 qpair failed and we were unable to recover it. 00:29:34.467 [2024-07-15 22:26:59.699374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.467 [2024-07-15 22:26:59.699381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.467 qpair failed and we were unable to recover it. 00:29:34.467 [2024-07-15 22:26:59.699799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.467 [2024-07-15 22:26:59.699807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.467 qpair failed and we were unable to recover it. 
00:29:34.467 [2024-07-15 22:26:59.700201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.467 [2024-07-15 22:26:59.700208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.467 qpair failed and we were unable to recover it. 00:29:34.467 [2024-07-15 22:26:59.700599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.467 [2024-07-15 22:26:59.700606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.467 qpair failed and we were unable to recover it. 00:29:34.467 [2024-07-15 22:26:59.700931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.467 [2024-07-15 22:26:59.700938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.467 qpair failed and we were unable to recover it. 00:29:34.467 [2024-07-15 22:26:59.701363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.467 [2024-07-15 22:26:59.701372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.467 qpair failed and we were unable to recover it. 00:29:34.467 [2024-07-15 22:26:59.701582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.467 [2024-07-15 22:26:59.701589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.467 qpair failed and we were unable to recover it. 00:29:34.467 [2024-07-15 22:26:59.701914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.467 [2024-07-15 22:26:59.701920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.467 qpair failed and we were unable to recover it. 00:29:34.467 [2024-07-15 22:26:59.702330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.468 [2024-07-15 22:26:59.702338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.468 qpair failed and we were unable to recover it. 00:29:34.468 [2024-07-15 22:26:59.702528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.468 [2024-07-15 22:26:59.702536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.468 qpair failed and we were unable to recover it. 00:29:34.468 [2024-07-15 22:26:59.702886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.468 [2024-07-15 22:26:59.702892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.468 qpair failed and we were unable to recover it. 00:29:34.468 [2024-07-15 22:26:59.703288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.468 [2024-07-15 22:26:59.703294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.468 qpair failed and we were unable to recover it. 
00:29:34.468 [2024-07-15 22:26:59.703502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.468 [2024-07-15 22:26:59.703509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.468 qpair failed and we were unable to recover it. 00:29:34.468 [2024-07-15 22:26:59.703927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.468 [2024-07-15 22:26:59.703933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.468 qpair failed and we were unable to recover it. 00:29:34.468 [2024-07-15 22:26:59.704254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.468 [2024-07-15 22:26:59.704261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.468 qpair failed and we were unable to recover it. 00:29:34.468 [2024-07-15 22:26:59.704666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.468 [2024-07-15 22:26:59.704673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.468 qpair failed and we were unable to recover it. 00:29:34.468 [2024-07-15 22:26:59.705077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.468 [2024-07-15 22:26:59.705084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.468 qpair failed and we were unable to recover it. 00:29:34.468 [2024-07-15 22:26:59.705572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.468 [2024-07-15 22:26:59.705579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.468 qpair failed and we were unable to recover it. 00:29:34.468 [2024-07-15 22:26:59.705845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.468 [2024-07-15 22:26:59.705852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.468 qpair failed and we were unable to recover it. 00:29:34.468 [2024-07-15 22:26:59.706312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.468 [2024-07-15 22:26:59.706319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.468 qpair failed and we were unable to recover it. 00:29:34.468 [2024-07-15 22:26:59.706717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.468 [2024-07-15 22:26:59.706725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.468 qpair failed and we were unable to recover it. 00:29:34.468 [2024-07-15 22:26:59.707127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.468 [2024-07-15 22:26:59.707135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.468 qpair failed and we were unable to recover it. 
00:29:34.468 [2024-07-15 22:26:59.707541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.468 [2024-07-15 22:26:59.707548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.468 qpair failed and we were unable to recover it. 00:29:34.468 [2024-07-15 22:26:59.707821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.468 [2024-07-15 22:26:59.707827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.468 qpair failed and we were unable to recover it. 00:29:34.468 [2024-07-15 22:26:59.708100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.468 [2024-07-15 22:26:59.708107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.468 qpair failed and we were unable to recover it. 00:29:34.468 [2024-07-15 22:26:59.708509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.468 [2024-07-15 22:26:59.708517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.468 qpair failed and we were unable to recover it. 00:29:34.468 [2024-07-15 22:26:59.708915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.468 [2024-07-15 22:26:59.708922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.468 qpair failed and we were unable to recover it. 00:29:34.468 [2024-07-15 22:26:59.709466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.468 [2024-07-15 22:26:59.709493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.468 qpair failed and we were unable to recover it. 00:29:34.468 [2024-07-15 22:26:59.709825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.468 [2024-07-15 22:26:59.709833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.468 qpair failed and we were unable to recover it. 00:29:34.468 [2024-07-15 22:26:59.710335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.468 [2024-07-15 22:26:59.710363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.468 qpair failed and we were unable to recover it. 00:29:34.468 [2024-07-15 22:26:59.710779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.468 [2024-07-15 22:26:59.710788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.468 qpair failed and we were unable to recover it. 00:29:34.468 [2024-07-15 22:26:59.711104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.468 [2024-07-15 22:26:59.711112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.468 qpair failed and we were unable to recover it. 
00:29:34.468 [2024-07-15 22:26:59.711477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.468 [2024-07-15 22:26:59.711485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.468 qpair failed and we were unable to recover it. 00:29:34.468 [2024-07-15 22:26:59.711905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.468 [2024-07-15 22:26:59.711911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.468 qpair failed and we were unable to recover it. 00:29:34.468 [2024-07-15 22:26:59.712401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.468 [2024-07-15 22:26:59.712428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.468 qpair failed and we were unable to recover it. 00:29:34.468 [2024-07-15 22:26:59.712921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.468 [2024-07-15 22:26:59.712929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.468 qpair failed and we were unable to recover it. 00:29:34.468 [2024-07-15 22:26:59.713329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.468 [2024-07-15 22:26:59.713357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.468 qpair failed and we were unable to recover it. 00:29:34.468 [2024-07-15 22:26:59.713817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.468 [2024-07-15 22:26:59.713826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.468 qpair failed and we were unable to recover it. 00:29:34.468 [2024-07-15 22:26:59.714342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.468 [2024-07-15 22:26:59.714369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.468 qpair failed and we were unable to recover it. 00:29:34.468 [2024-07-15 22:26:59.714806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.468 [2024-07-15 22:26:59.714815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.468 qpair failed and we were unable to recover it. 00:29:34.468 [2024-07-15 22:26:59.715316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.468 [2024-07-15 22:26:59.715343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.468 qpair failed and we were unable to recover it. 00:29:34.468 [2024-07-15 22:26:59.715836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.468 [2024-07-15 22:26:59.715844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.468 qpair failed and we were unable to recover it. 
00:29:34.468 [2024-07-15 22:26:59.716103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.468 [2024-07-15 22:26:59.716110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.468 qpair failed and we were unable to recover it. 00:29:34.468 [2024-07-15 22:26:59.716407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.468 [2024-07-15 22:26:59.716414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.468 qpair failed and we were unable to recover it. 00:29:34.468 [2024-07-15 22:26:59.716835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.468 [2024-07-15 22:26:59.716842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.468 qpair failed and we were unable to recover it. 00:29:34.468 [2024-07-15 22:26:59.717341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.468 [2024-07-15 22:26:59.717372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.468 qpair failed and we were unable to recover it. 00:29:34.468 [2024-07-15 22:26:59.717774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.468 [2024-07-15 22:26:59.717782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.468 qpair failed and we were unable to recover it. 00:29:34.468 [2024-07-15 22:26:59.717989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.468 [2024-07-15 22:26:59.717995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.468 qpair failed and we were unable to recover it. 00:29:34.468 [2024-07-15 22:26:59.718407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.468 [2024-07-15 22:26:59.718414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.469 qpair failed and we were unable to recover it. 00:29:34.469 [2024-07-15 22:26:59.718737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.469 [2024-07-15 22:26:59.718744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.469 qpair failed and we were unable to recover it. 00:29:34.469 [2024-07-15 22:26:59.719168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.469 [2024-07-15 22:26:59.719176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.469 qpair failed and we were unable to recover it. 00:29:34.469 [2024-07-15 22:26:59.719437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.469 [2024-07-15 22:26:59.719445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.469 qpair failed and we were unable to recover it. 
00:29:34.469 [2024-07-15 22:26:59.719897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.469 [2024-07-15 22:26:59.719903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.469 qpair failed and we were unable to recover it. 00:29:34.469 [2024-07-15 22:26:59.720302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.469 [2024-07-15 22:26:59.720309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.469 qpair failed and we were unable to recover it. 00:29:34.469 [2024-07-15 22:26:59.720517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.469 [2024-07-15 22:26:59.720523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.469 qpair failed and we were unable to recover it. 00:29:34.469 [2024-07-15 22:26:59.720853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.469 [2024-07-15 22:26:59.720859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.469 qpair failed and we were unable to recover it. 00:29:34.469 [2024-07-15 22:26:59.721267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.469 [2024-07-15 22:26:59.721273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.469 qpair failed and we were unable to recover it. 00:29:34.469 [2024-07-15 22:26:59.721603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.469 [2024-07-15 22:26:59.721609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.469 qpair failed and we were unable to recover it. 00:29:34.469 [2024-07-15 22:26:59.721911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.469 [2024-07-15 22:26:59.721918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.469 qpair failed and we were unable to recover it. 00:29:34.469 [2024-07-15 22:26:59.722270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.469 [2024-07-15 22:26:59.722277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.469 qpair failed and we were unable to recover it. 00:29:34.469 [2024-07-15 22:26:59.722563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.469 [2024-07-15 22:26:59.722570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.469 qpair failed and we were unable to recover it. 00:29:34.469 [2024-07-15 22:26:59.722990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.469 [2024-07-15 22:26:59.722997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.469 qpair failed and we were unable to recover it. 
00:29:34.469 [2024-07-15 22:26:59.723214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.469 [2024-07-15 22:26:59.723221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.469 qpair failed and we were unable to recover it. 00:29:34.469 [2024-07-15 22:26:59.723416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.469 [2024-07-15 22:26:59.723426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.469 qpair failed and we were unable to recover it. 00:29:34.469 [2024-07-15 22:26:59.723913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.469 [2024-07-15 22:26:59.723920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.469 qpair failed and we were unable to recover it. 00:29:34.469 [2024-07-15 22:26:59.724313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.469 [2024-07-15 22:26:59.724320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.469 qpair failed and we were unable to recover it. 00:29:34.469 [2024-07-15 22:26:59.724613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.469 [2024-07-15 22:26:59.724619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.469 qpair failed and we were unable to recover it. 00:29:34.469 [2024-07-15 22:26:59.725072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.469 [2024-07-15 22:26:59.725078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.469 qpair failed and we were unable to recover it. 00:29:34.469 [2024-07-15 22:26:59.725389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.469 [2024-07-15 22:26:59.725397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.469 qpair failed and we were unable to recover it. 00:29:34.469 [2024-07-15 22:26:59.725604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.469 [2024-07-15 22:26:59.725610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.469 qpair failed and we were unable to recover it. 00:29:34.469 [2024-07-15 22:26:59.726049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.469 [2024-07-15 22:26:59.726056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.469 qpair failed and we were unable to recover it. 00:29:34.469 [2024-07-15 22:26:59.726458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.469 [2024-07-15 22:26:59.726465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.469 qpair failed and we were unable to recover it. 
00:29:34.469 [2024-07-15 22:26:59.726537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.469 [2024-07-15 22:26:59.726544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.469 qpair failed and we were unable to recover it. 00:29:34.469 [2024-07-15 22:26:59.726918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.469 [2024-07-15 22:26:59.726925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.469 qpair failed and we were unable to recover it. 00:29:34.469 [2024-07-15 22:26:59.727316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.469 [2024-07-15 22:26:59.727322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.469 qpair failed and we were unable to recover it. 00:29:34.469 [2024-07-15 22:26:59.727721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.469 [2024-07-15 22:26:59.727728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.469 qpair failed and we were unable to recover it. 00:29:34.469 [2024-07-15 22:26:59.728118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.469 [2024-07-15 22:26:59.728130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.469 qpair failed and we were unable to recover it. 00:29:34.469 [2024-07-15 22:26:59.728329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.469 [2024-07-15 22:26:59.728339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.469 qpair failed and we were unable to recover it. 00:29:34.469 [2024-07-15 22:26:59.728728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.469 [2024-07-15 22:26:59.728735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.469 qpair failed and we were unable to recover it. 00:29:34.469 [2024-07-15 22:26:59.729156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.469 [2024-07-15 22:26:59.729162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.469 qpair failed and we were unable to recover it. 00:29:34.469 [2024-07-15 22:26:59.729638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.469 [2024-07-15 22:26:59.729644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.469 qpair failed and we were unable to recover it. 00:29:34.469 [2024-07-15 22:26:59.730084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.469 [2024-07-15 22:26:59.730091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.469 qpair failed and we were unable to recover it. 
00:29:34.469 [2024-07-15 22:26:59.730408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.469 [2024-07-15 22:26:59.730416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.469 qpair failed and we were unable to recover it. 00:29:34.469 [2024-07-15 22:26:59.730839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.469 [2024-07-15 22:26:59.730845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.469 qpair failed and we were unable to recover it. 00:29:34.469 [2024-07-15 22:26:59.731239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.469 [2024-07-15 22:26:59.731246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.469 qpair failed and we were unable to recover it. 00:29:34.469 [2024-07-15 22:26:59.731696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.469 [2024-07-15 22:26:59.731703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.469 qpair failed and we were unable to recover it. 00:29:34.469 [2024-07-15 22:26:59.732108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.469 [2024-07-15 22:26:59.732114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.469 qpair failed and we were unable to recover it. 00:29:34.469 [2024-07-15 22:26:59.732509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.469 [2024-07-15 22:26:59.732516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.469 qpair failed and we were unable to recover it. 00:29:34.469 [2024-07-15 22:26:59.732974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.469 [2024-07-15 22:26:59.732980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.469 qpair failed and we were unable to recover it. 00:29:34.470 [2024-07-15 22:26:59.733472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.470 [2024-07-15 22:26:59.733499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.470 qpair failed and we were unable to recover it. 00:29:34.470 [2024-07-15 22:26:59.733942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.470 [2024-07-15 22:26:59.733950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.470 qpair failed and we were unable to recover it. 00:29:34.470 [2024-07-15 22:26:59.734460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.470 [2024-07-15 22:26:59.734488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.470 qpair failed and we were unable to recover it. 
00:29:34.470 [2024-07-15 22:26:59.734707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.470 [2024-07-15 22:26:59.734715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.470 qpair failed and we were unable to recover it. 00:29:34.470 [2024-07-15 22:26:59.735132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.470 [2024-07-15 22:26:59.735141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.470 qpair failed and we were unable to recover it. 00:29:34.470 [2024-07-15 22:26:59.735568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.470 [2024-07-15 22:26:59.735574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.470 qpair failed and we were unable to recover it. 00:29:34.470 [2024-07-15 22:26:59.735984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.470 [2024-07-15 22:26:59.735991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.470 qpair failed and we were unable to recover it. 00:29:34.470 [2024-07-15 22:26:59.736505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.470 [2024-07-15 22:26:59.736533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.470 qpair failed and we were unable to recover it. 00:29:34.470 [2024-07-15 22:26:59.736774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.470 [2024-07-15 22:26:59.736782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.470 qpair failed and we were unable to recover it. 00:29:34.470 [2024-07-15 22:26:59.737001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.470 [2024-07-15 22:26:59.737008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.470 qpair failed and we were unable to recover it. 00:29:34.470 [2024-07-15 22:26:59.737208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.470 [2024-07-15 22:26:59.737216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.470 qpair failed and we were unable to recover it. 00:29:34.470 [2024-07-15 22:26:59.737647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.470 [2024-07-15 22:26:59.737654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.470 qpair failed and we were unable to recover it. 00:29:34.470 [2024-07-15 22:26:59.737930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.470 [2024-07-15 22:26:59.737937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.470 qpair failed and we were unable to recover it. 
00:29:34.470 [2024-07-15 22:26:59.738260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.470 [2024-07-15 22:26:59.738267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.470 qpair failed and we were unable to recover it. 00:29:34.470 [2024-07-15 22:26:59.738679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.470 [2024-07-15 22:26:59.738686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.470 qpair failed and we were unable to recover it. 00:29:34.470 [2024-07-15 22:26:59.739086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.470 [2024-07-15 22:26:59.739093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.470 qpair failed and we were unable to recover it. 00:29:34.470 [2024-07-15 22:26:59.739495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.470 [2024-07-15 22:26:59.739502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.470 qpair failed and we were unable to recover it. 00:29:34.470 [2024-07-15 22:26:59.739980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.470 [2024-07-15 22:26:59.739986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.470 qpair failed and we were unable to recover it. 00:29:34.470 [2024-07-15 22:26:59.740403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.470 [2024-07-15 22:26:59.740430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.470 qpair failed and we were unable to recover it. 00:29:34.470 [2024-07-15 22:26:59.740644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.470 [2024-07-15 22:26:59.740654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.470 qpair failed and we were unable to recover it. 00:29:34.470 [2024-07-15 22:26:59.741095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.470 [2024-07-15 22:26:59.741102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.470 qpair failed and we were unable to recover it. 00:29:34.470 [2024-07-15 22:26:59.741577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.470 [2024-07-15 22:26:59.741585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.470 qpair failed and we were unable to recover it. 00:29:34.470 [2024-07-15 22:26:59.742003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.470 [2024-07-15 22:26:59.742010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.470 qpair failed and we were unable to recover it. 
00:29:34.470 [2024-07-15 22:26:59.742430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.470 [2024-07-15 22:26:59.742440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.470 qpair failed and we were unable to recover it. 00:29:34.470 [2024-07-15 22:26:59.742711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.470 [2024-07-15 22:26:59.742717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.470 qpair failed and we were unable to recover it. 00:29:34.470 [2024-07-15 22:26:59.743118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.470 [2024-07-15 22:26:59.743131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.470 qpair failed and we were unable to recover it. 00:29:34.470 [2024-07-15 22:26:59.743650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.470 [2024-07-15 22:26:59.743656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.470 qpair failed and we were unable to recover it. 00:29:34.470 [2024-07-15 22:26:59.743864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.470 [2024-07-15 22:26:59.743871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.470 qpair failed and we were unable to recover it. 00:29:34.470 [2024-07-15 22:26:59.744404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.470 [2024-07-15 22:26:59.744431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.470 qpair failed and we were unable to recover it. 00:29:34.470 [2024-07-15 22:26:59.744895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.470 [2024-07-15 22:26:59.744904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.470 qpair failed and we were unable to recover it. 00:29:34.470 [2024-07-15 22:26:59.745352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.470 [2024-07-15 22:26:59.745380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.470 qpair failed and we were unable to recover it. 00:29:34.470 [2024-07-15 22:26:59.745662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.470 [2024-07-15 22:26:59.745670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.470 qpair failed and we were unable to recover it. 00:29:34.470 [2024-07-15 22:26:59.746068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.470 [2024-07-15 22:26:59.746075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.470 qpair failed and we were unable to recover it. 
00:29:34.470 [2024-07-15 22:26:59.746474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.470 [2024-07-15 22:26:59.746481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.470 qpair failed and we were unable to recover it. 00:29:34.470 [2024-07-15 22:26:59.746884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.470 [2024-07-15 22:26:59.746891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.470 qpair failed and we were unable to recover it. 00:29:34.470 [2024-07-15 22:26:59.747294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.470 [2024-07-15 22:26:59.747301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.470 qpair failed and we were unable to recover it. 00:29:34.470 [2024-07-15 22:26:59.747586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.470 [2024-07-15 22:26:59.747592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.470 qpair failed and we were unable to recover it. 00:29:34.470 [2024-07-15 22:26:59.747866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.470 [2024-07-15 22:26:59.747873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.470 qpair failed and we were unable to recover it. 00:29:34.470 [2024-07-15 22:26:59.748309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.470 [2024-07-15 22:26:59.748316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.470 qpair failed and we were unable to recover it. 00:29:34.470 [2024-07-15 22:26:59.748716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.470 [2024-07-15 22:26:59.748723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.470 qpair failed and we were unable to recover it. 00:29:34.470 22:26:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:34.470 [2024-07-15 22:26:59.749167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.471 [2024-07-15 22:26:59.749177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.471 qpair failed and we were unable to recover it. 00:29:34.471 22:26:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:29:34.471 22:26:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:34.471 [2024-07-15 22:26:59.749611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.471 [2024-07-15 22:26:59.749620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.471 qpair failed and we were unable to recover it. 
00:29:34.471 22:26:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:34.471 22:26:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:34.471 [2024-07-15 22:26:59.750040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.471 [2024-07-15 22:26:59.750049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.471 qpair failed and we were unable to recover it. 00:29:34.471 [2024-07-15 22:26:59.750340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.471 [2024-07-15 22:26:59.750347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.471 qpair failed and we were unable to recover it. 00:29:34.471 [2024-07-15 22:26:59.750762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.471 [2024-07-15 22:26:59.750769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.471 qpair failed and we were unable to recover it. 00:29:34.471 [2024-07-15 22:26:59.751166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.471 [2024-07-15 22:26:59.751174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.471 qpair failed and we were unable to recover it. 00:29:34.471 [2024-07-15 22:26:59.751574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.471 [2024-07-15 22:26:59.751581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.471 qpair failed and we were unable to recover it. 00:29:34.471 [2024-07-15 22:26:59.751979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.471 [2024-07-15 22:26:59.751987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.471 qpair failed and we were unable to recover it. 00:29:34.471 [2024-07-15 22:26:59.752392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.471 [2024-07-15 22:26:59.752399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.471 qpair failed and we were unable to recover it. 00:29:34.471 [2024-07-15 22:26:59.752789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.471 [2024-07-15 22:26:59.752798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.471 qpair failed and we were unable to recover it. 00:29:34.471 [2024-07-15 22:26:59.753213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.471 [2024-07-15 22:26:59.753220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.471 qpair failed and we were unable to recover it. 
00:29:34.471 [2024-07-15 22:26:59.753638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.471 [2024-07-15 22:26:59.753645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.471 qpair failed and we were unable to recover it. 00:29:34.471 [2024-07-15 22:26:59.754042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.471 [2024-07-15 22:26:59.754050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.471 qpair failed and we were unable to recover it. 00:29:34.471 [2024-07-15 22:26:59.754439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.471 [2024-07-15 22:26:59.754446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.471 qpair failed and we were unable to recover it. 00:29:34.471 [2024-07-15 22:26:59.754836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.471 [2024-07-15 22:26:59.754844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.471 qpair failed and we were unable to recover it. 00:29:34.471 [2024-07-15 22:26:59.755043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.471 [2024-07-15 22:26:59.755052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.471 qpair failed and we were unable to recover it. 00:29:34.471 [2024-07-15 22:26:59.755472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.471 [2024-07-15 22:26:59.755480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.471 qpair failed and we were unable to recover it. 00:29:34.471 [2024-07-15 22:26:59.755874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.471 [2024-07-15 22:26:59.755881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.471 qpair failed and we were unable to recover it. 00:29:34.471 [2024-07-15 22:26:59.756266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.471 [2024-07-15 22:26:59.756273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.471 qpair failed and we were unable to recover it. 00:29:34.471 [2024-07-15 22:26:59.756656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.471 [2024-07-15 22:26:59.756663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.471 qpair failed and we were unable to recover it. 00:29:34.471 [2024-07-15 22:26:59.756867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.471 [2024-07-15 22:26:59.756876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.471 qpair failed and we were unable to recover it. 
00:29:34.471 [2024-07-15 22:26:59.757296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.471 [2024-07-15 22:26:59.757305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.471 qpair failed and we were unable to recover it. 00:29:34.471 [2024-07-15 22:26:59.757705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.471 [2024-07-15 22:26:59.757712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.471 qpair failed and we were unable to recover it. 00:29:34.471 [2024-07-15 22:26:59.758096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.471 [2024-07-15 22:26:59.758104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.471 qpair failed and we were unable to recover it. 00:29:34.471 [2024-07-15 22:26:59.758178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.471 [2024-07-15 22:26:59.758186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.471 qpair failed and we were unable to recover it. 00:29:34.471 [2024-07-15 22:26:59.758603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.471 [2024-07-15 22:26:59.758610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.471 qpair failed and we were unable to recover it. 00:29:34.471 [2024-07-15 22:26:59.758978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.471 [2024-07-15 22:26:59.758984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.471 qpair failed and we were unable to recover it. 00:29:34.471 [2024-07-15 22:26:59.759382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.471 [2024-07-15 22:26:59.759389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.471 qpair failed and we were unable to recover it. 00:29:34.471 [2024-07-15 22:26:59.759787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.471 [2024-07-15 22:26:59.759794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.471 qpair failed and we were unable to recover it. 00:29:34.471 [2024-07-15 22:26:59.760185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.471 [2024-07-15 22:26:59.760193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.471 qpair failed and we were unable to recover it. 00:29:34.471 [2024-07-15 22:26:59.760680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.471 [2024-07-15 22:26:59.760687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.471 qpair failed and we were unable to recover it. 
00:29:34.471 [2024-07-15 22:26:59.761133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.471 [2024-07-15 22:26:59.761140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.471 qpair failed and we were unable to recover it. 00:29:34.471 [2024-07-15 22:26:59.761545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.471 [2024-07-15 22:26:59.761552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.471 qpair failed and we were unable to recover it. 00:29:34.471 [2024-07-15 22:26:59.761859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.471 [2024-07-15 22:26:59.761866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.471 qpair failed and we were unable to recover it. 00:29:34.471 [2024-07-15 22:26:59.762128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.471 [2024-07-15 22:26:59.762136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.471 qpair failed and we were unable to recover it. 00:29:34.471 [2024-07-15 22:26:59.762457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.471 [2024-07-15 22:26:59.762464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.471 qpair failed and we were unable to recover it. 00:29:34.471 [2024-07-15 22:26:59.762872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.471 [2024-07-15 22:26:59.762879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.471 qpair failed and we were unable to recover it. 00:29:34.471 [2024-07-15 22:26:59.763284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.471 [2024-07-15 22:26:59.763292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.471 qpair failed and we were unable to recover it. 00:29:34.735 [2024-07-15 22:26:59.763704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.736 [2024-07-15 22:26:59.763711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.736 qpair failed and we were unable to recover it. 00:29:34.736 [2024-07-15 22:26:59.764102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.736 [2024-07-15 22:26:59.764108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.736 qpair failed and we were unable to recover it. 00:29:34.736 [2024-07-15 22:26:59.764519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.736 [2024-07-15 22:26:59.764526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.736 qpair failed and we were unable to recover it. 
00:29:34.736 [2024-07-15 22:26:59.764780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.736 [2024-07-15 22:26:59.764786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.736 qpair failed and we were unable to recover it. 00:29:34.736 [2024-07-15 22:26:59.765205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.736 [2024-07-15 22:26:59.765213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.736 qpair failed and we were unable to recover it. 00:29:34.736 [2024-07-15 22:26:59.765619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.736 [2024-07-15 22:26:59.765626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.736 qpair failed and we were unable to recover it. 00:29:34.736 [2024-07-15 22:26:59.765967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.736 [2024-07-15 22:26:59.765974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.736 qpair failed and we were unable to recover it. 00:29:34.736 [2024-07-15 22:26:59.766198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.736 [2024-07-15 22:26:59.766206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.736 qpair failed and we were unable to recover it. 00:29:34.736 [2024-07-15 22:26:59.766588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.736 [2024-07-15 22:26:59.766595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.736 qpair failed and we were unable to recover it. 00:29:34.736 [2024-07-15 22:26:59.766960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.736 [2024-07-15 22:26:59.766967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.736 qpair failed and we were unable to recover it. 00:29:34.736 [2024-07-15 22:26:59.767362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.736 [2024-07-15 22:26:59.767369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.736 qpair failed and we were unable to recover it. 00:29:34.736 [2024-07-15 22:26:59.767762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.736 [2024-07-15 22:26:59.767769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.736 qpair failed and we were unable to recover it. 00:29:34.736 [2024-07-15 22:26:59.768159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.736 [2024-07-15 22:26:59.768166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.736 qpair failed and we were unable to recover it. 
00:29:34.736 [2024-07-15 22:26:59.768578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.736 [2024-07-15 22:26:59.768586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.736 qpair failed and we were unable to recover it. 00:29:34.736 [2024-07-15 22:26:59.768785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.736 [2024-07-15 22:26:59.768793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.736 qpair failed and we were unable to recover it. 00:29:34.736 [2024-07-15 22:26:59.769102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.736 [2024-07-15 22:26:59.769109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.736 qpair failed and we were unable to recover it. 00:29:34.736 [2024-07-15 22:26:59.769523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.736 [2024-07-15 22:26:59.769530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.736 qpair failed and we were unable to recover it. 00:29:34.736 [2024-07-15 22:26:59.769707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.736 [2024-07-15 22:26:59.769714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.736 qpair failed and we were unable to recover it. 00:29:34.736 [2024-07-15 22:26:59.770167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.736 [2024-07-15 22:26:59.770174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.736 qpair failed and we were unable to recover it. 00:29:34.736 [2024-07-15 22:26:59.770595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.736 [2024-07-15 22:26:59.770601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.736 qpair failed and we were unable to recover it. 00:29:34.736 [2024-07-15 22:26:59.770824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.736 [2024-07-15 22:26:59.770832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.736 qpair failed and we were unable to recover it. 00:29:34.736 [2024-07-15 22:26:59.771008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.736 [2024-07-15 22:26:59.771017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.736 qpair failed and we were unable to recover it. 00:29:34.736 [2024-07-15 22:26:59.771409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.736 [2024-07-15 22:26:59.771416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.736 qpair failed and we were unable to recover it. 
00:29:34.736 [2024-07-15 22:26:59.771892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.736 [2024-07-15 22:26:59.771901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.736 qpair failed and we were unable to recover it. 00:29:34.736 [2024-07-15 22:26:59.772305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.736 [2024-07-15 22:26:59.772313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.736 qpair failed and we were unable to recover it. 00:29:34.736 [2024-07-15 22:26:59.772708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.736 [2024-07-15 22:26:59.772715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.736 qpair failed and we were unable to recover it. 00:29:34.736 [2024-07-15 22:26:59.772984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.736 [2024-07-15 22:26:59.772991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.736 qpair failed and we were unable to recover it. 00:29:34.736 [2024-07-15 22:26:59.773413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.736 [2024-07-15 22:26:59.773420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.736 qpair failed and we were unable to recover it. 00:29:34.736 [2024-07-15 22:26:59.773622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.736 [2024-07-15 22:26:59.773630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.736 qpair failed and we were unable to recover it. 00:29:34.736 [2024-07-15 22:26:59.774065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.736 [2024-07-15 22:26:59.774072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.736 qpair failed and we were unable to recover it. 00:29:34.736 [2024-07-15 22:26:59.774298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.736 [2024-07-15 22:26:59.774305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.736 qpair failed and we were unable to recover it. 00:29:34.736 [2024-07-15 22:26:59.774719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.736 [2024-07-15 22:26:59.774725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.736 qpair failed and we were unable to recover it. 00:29:34.736 [2024-07-15 22:26:59.775127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.736 [2024-07-15 22:26:59.775135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.736 qpair failed and we were unable to recover it. 
00:29:34.736 [2024-07-15 22:26:59.775430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.736 [2024-07-15 22:26:59.775437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.736 qpair failed and we were unable to recover it. 00:29:34.736 [2024-07-15 22:26:59.775830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.736 [2024-07-15 22:26:59.775838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.736 qpair failed and we were unable to recover it. 00:29:34.736 [2024-07-15 22:26:59.776246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.736 [2024-07-15 22:26:59.776253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.736 qpair failed and we were unable to recover it. 00:29:34.736 [2024-07-15 22:26:59.776660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.736 [2024-07-15 22:26:59.776667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.736 qpair failed and we were unable to recover it. 00:29:34.736 [2024-07-15 22:26:59.777065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.736 [2024-07-15 22:26:59.777072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.736 qpair failed and we were unable to recover it. 00:29:34.736 [2024-07-15 22:26:59.777273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.736 [2024-07-15 22:26:59.777281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.736 qpair failed and we were unable to recover it. 00:29:34.736 [2024-07-15 22:26:59.777565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.736 [2024-07-15 22:26:59.777571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.737 qpair failed and we were unable to recover it. 00:29:34.737 [2024-07-15 22:26:59.777867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.737 [2024-07-15 22:26:59.777874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.737 qpair failed and we were unable to recover it. 00:29:34.737 [2024-07-15 22:26:59.778290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.737 [2024-07-15 22:26:59.778297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.737 qpair failed and we were unable to recover it. 00:29:34.737 [2024-07-15 22:26:59.778737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.737 [2024-07-15 22:26:59.778744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.737 qpair failed and we were unable to recover it. 
00:29:34.737 [2024-07-15 22:26:59.779002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.737 [2024-07-15 22:26:59.779010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.737 qpair failed and we were unable to recover it. 00:29:34.737 [2024-07-15 22:26:59.779467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.737 [2024-07-15 22:26:59.779474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.737 qpair failed and we were unable to recover it. 00:29:34.737 [2024-07-15 22:26:59.779658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.737 [2024-07-15 22:26:59.779665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.737 qpair failed and we were unable to recover it. 00:29:34.737 [2024-07-15 22:26:59.780052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.737 [2024-07-15 22:26:59.780059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.737 qpair failed and we were unable to recover it. 00:29:34.737 [2024-07-15 22:26:59.780516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.737 [2024-07-15 22:26:59.780523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.737 qpair failed and we were unable to recover it. 00:29:34.737 [2024-07-15 22:26:59.780924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.737 [2024-07-15 22:26:59.780931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.737 qpair failed and we were unable to recover it. 00:29:34.737 [2024-07-15 22:26:59.781198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.737 [2024-07-15 22:26:59.781205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.737 qpair failed and we were unable to recover it. 00:29:34.737 [2024-07-15 22:26:59.781413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.737 [2024-07-15 22:26:59.781420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.737 qpair failed and we were unable to recover it. 00:29:34.737 [2024-07-15 22:26:59.781855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.737 [2024-07-15 22:26:59.781862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.737 qpair failed and we were unable to recover it. 00:29:34.737 [2024-07-15 22:26:59.782246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.737 [2024-07-15 22:26:59.782253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.737 qpair failed and we were unable to recover it. 
00:29:34.737 [2024-07-15 22:26:59.782668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.737 [2024-07-15 22:26:59.782674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.737 qpair failed and we were unable to recover it. 00:29:34.737 [2024-07-15 22:26:59.782879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.737 [2024-07-15 22:26:59.782886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.737 qpair failed and we were unable to recover it. 00:29:34.737 [2024-07-15 22:26:59.783307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.737 [2024-07-15 22:26:59.783314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.737 qpair failed and we were unable to recover it. 00:29:34.737 [2024-07-15 22:26:59.783715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.737 [2024-07-15 22:26:59.783723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.737 qpair failed and we were unable to recover it. 00:29:34.737 [2024-07-15 22:26:59.784137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.737 [2024-07-15 22:26:59.784146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.737 qpair failed and we were unable to recover it. 00:29:34.737 [2024-07-15 22:26:59.784550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.737 [2024-07-15 22:26:59.784557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.737 qpair failed and we were unable to recover it. 00:29:34.737 [2024-07-15 22:26:59.784993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.737 [2024-07-15 22:26:59.785000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.737 qpair failed and we were unable to recover it. 00:29:34.737 [2024-07-15 22:26:59.785493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.737 [2024-07-15 22:26:59.785521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.737 qpair failed and we were unable to recover it. 00:29:34.737 [2024-07-15 22:26:59.785850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.737 [2024-07-15 22:26:59.785859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.737 qpair failed and we were unable to recover it. 00:29:34.737 [2024-07-15 22:26:59.786389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.737 [2024-07-15 22:26:59.786417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.737 qpair failed and we were unable to recover it. 
00:29:34.737 [2024-07-15 22:26:59.786858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.737 [2024-07-15 22:26:59.786870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.737 qpair failed and we were unable to recover it. 00:29:34.737 [2024-07-15 22:26:59.787368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.737 [2024-07-15 22:26:59.787396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.737 qpair failed and we were unable to recover it. 00:29:34.737 [2024-07-15 22:26:59.787888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.737 [2024-07-15 22:26:59.787897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.737 qpair failed and we were unable to recover it. 00:29:34.737 [2024-07-15 22:26:59.788439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.737 [2024-07-15 22:26:59.788466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.737 qpair failed and we were unable to recover it. 00:29:34.737 22:26:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:34.737 [2024-07-15 22:26:59.788904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.737 [2024-07-15 22:26:59.788914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.737 qpair failed and we were unable to recover it. 00:29:34.737 22:26:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:34.737 22:26:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.737 [2024-07-15 22:26:59.789484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.737 [2024-07-15 22:26:59.789512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.737 qpair failed and we were unable to recover it. 00:29:34.737 22:26:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:34.737 [2024-07-15 22:26:59.789921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.737 [2024-07-15 22:26:59.789931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.737 qpair failed and we were unable to recover it. 00:29:34.737 [2024-07-15 22:26:59.790370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.737 [2024-07-15 22:26:59.790398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.737 qpair failed and we were unable to recover it. 
00:29:34.737 [2024-07-15 22:26:59.790736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.737 [2024-07-15 22:26:59.790744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.737 qpair failed and we were unable to recover it. 00:29:34.737 [2024-07-15 22:26:59.791142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.737 [2024-07-15 22:26:59.791149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.737 qpair failed and we were unable to recover it. 00:29:34.737 [2024-07-15 22:26:59.791465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.737 [2024-07-15 22:26:59.791472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.737 qpair failed and we were unable to recover it. 00:29:34.737 [2024-07-15 22:26:59.791890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.737 [2024-07-15 22:26:59.791897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.737 qpair failed and we were unable to recover it. 00:29:34.737 [2024-07-15 22:26:59.792206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.737 [2024-07-15 22:26:59.792213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.737 qpair failed and we were unable to recover it. 00:29:34.737 [2024-07-15 22:26:59.792616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.737 [2024-07-15 22:26:59.792622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.737 qpair failed and we were unable to recover it. 00:29:34.737 [2024-07-15 22:26:59.792883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.737 [2024-07-15 22:26:59.792890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.738 qpair failed and we were unable to recover it. 00:29:34.738 [2024-07-15 22:26:59.793221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.738 [2024-07-15 22:26:59.793228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.738 qpair failed and we were unable to recover it. 00:29:34.738 [2024-07-15 22:26:59.793637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.738 [2024-07-15 22:26:59.793643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.738 qpair failed and we were unable to recover it. 00:29:34.738 [2024-07-15 22:26:59.793733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.738 [2024-07-15 22:26:59.793739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.738 qpair failed and we were unable to recover it. 
00:29:34.738 [2024-07-15 22:26:59.794057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.738 [2024-07-15 22:26:59.794064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.738 qpair failed and we were unable to recover it. 00:29:34.738 [2024-07-15 22:26:59.794458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.738 [2024-07-15 22:26:59.794465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.738 qpair failed and we were unable to recover it. 00:29:34.738 [2024-07-15 22:26:59.794930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.738 [2024-07-15 22:26:59.794937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.738 qpair failed and we were unable to recover it. 00:29:34.738 [2024-07-15 22:26:59.795245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.738 [2024-07-15 22:26:59.795253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.738 qpair failed and we were unable to recover it. 00:29:34.738 [2024-07-15 22:26:59.795649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.738 [2024-07-15 22:26:59.795656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.738 qpair failed and we were unable to recover it. 00:29:34.738 [2024-07-15 22:26:59.796044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.738 [2024-07-15 22:26:59.796051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.738 qpair failed and we were unable to recover it. 00:29:34.738 [2024-07-15 22:26:59.796461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.738 [2024-07-15 22:26:59.796468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.738 qpair failed and we were unable to recover it. 00:29:34.738 [2024-07-15 22:26:59.796904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.738 [2024-07-15 22:26:59.796913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.738 qpair failed and we were unable to recover it. 00:29:34.738 [2024-07-15 22:26:59.797380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.738 [2024-07-15 22:26:59.797387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.738 qpair failed and we were unable to recover it. 00:29:34.738 [2024-07-15 22:26:59.797802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.738 [2024-07-15 22:26:59.797809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.738 qpair failed and we were unable to recover it. 
00:29:34.738 [2024-07-15 22:26:59.798128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.738 [2024-07-15 22:26:59.798135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.738 qpair failed and we were unable to recover it. 00:29:34.738 [2024-07-15 22:26:59.798541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.738 [2024-07-15 22:26:59.798547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.738 qpair failed and we were unable to recover it. 00:29:34.738 [2024-07-15 22:26:59.798964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.738 [2024-07-15 22:26:59.798971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.738 qpair failed and we were unable to recover it. 00:29:34.738 [2024-07-15 22:26:59.799455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.738 [2024-07-15 22:26:59.799483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.738 qpair failed and we were unable to recover it. 00:29:34.738 [2024-07-15 22:26:59.799954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.738 [2024-07-15 22:26:59.799962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.738 qpair failed and we were unable to recover it. 00:29:34.738 [2024-07-15 22:26:59.800459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.738 [2024-07-15 22:26:59.800487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.738 qpair failed and we were unable to recover it. 00:29:34.738 [2024-07-15 22:26:59.800925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.738 [2024-07-15 22:26:59.800934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.738 qpair failed and we were unable to recover it. 00:29:34.738 [2024-07-15 22:26:59.801342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.738 [2024-07-15 22:26:59.801349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.738 qpair failed and we were unable to recover it. 00:29:34.738 [2024-07-15 22:26:59.801787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.738 [2024-07-15 22:26:59.801794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.738 qpair failed and we were unable to recover it. 00:29:34.738 [2024-07-15 22:26:59.802350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.738 [2024-07-15 22:26:59.802379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.738 qpair failed and we were unable to recover it. 
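Note on the block of repeated errors above: errno 111 is ECONNREFUSED, meaning the host-side initiator's connect() to 10.0.0.2:4420 is being refused because no NVMe/TCP listener is up on that address yet (the target's "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice only appears further down in this log). Every refused attempt produces the same three-message pattern: posix_sock_create, nvme_tcp_qpair_connect_sock, and "qpair failed and we were unable to recover it." A minimal sketch for spotting the same condition by hand, assuming bash and a saved copy of this output named build.log (both assumptions, not part of the test):

  # probe the address/port taken from the log; a refusal here is the same ECONNREFUSED (errno 111) seen above
  if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "10.0.0.2:4420 refused the connection: no NVMe/TCP listener is up yet"
  fi
  # count how many refused attempts this log recorded
  grep -o 'connect() failed, errno = 111' build.log | wc -l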
00:29:34.738 [2024-07-15 22:26:59.802779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.738 [2024-07-15 22:26:59.802788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.738 qpair failed and we were unable to recover it. 00:29:34.738 [2024-07-15 22:26:59.803304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.738 [2024-07-15 22:26:59.803332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.738 qpair failed and we were unable to recover it. 00:29:34.738 [2024-07-15 22:26:59.803811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.738 [2024-07-15 22:26:59.803819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.738 qpair failed and we were unable to recover it. 00:29:34.738 [2024-07-15 22:26:59.804024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.738 [2024-07-15 22:26:59.804031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.738 qpair failed and we were unable to recover it. 00:29:34.738 Malloc0 00:29:34.738 [2024-07-15 22:26:59.804469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.738 [2024-07-15 22:26:59.804476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.738 qpair failed and we were unable to recover it. 00:29:34.738 [2024-07-15 22:26:59.804684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.738 [2024-07-15 22:26:59.804690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.738 qpair failed and we were unable to recover it. 00:29:34.738 22:26:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.738 [2024-07-15 22:26:59.805148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.738 [2024-07-15 22:26:59.805157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.738 qpair failed and we were unable to recover it. 00:29:34.738 22:26:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:34.738 [2024-07-15 22:26:59.805603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.738 [2024-07-15 22:26:59.805610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.738 qpair failed and we were unable to recover it. 
00:29:34.738 22:26:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.738 [2024-07-15 22:26:59.805881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.738 [2024-07-15 22:26:59.805889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.738 qpair failed and we were unable to recover it. 00:29:34.738 22:26:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:34.738 [2024-07-15 22:26:59.806295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.738 [2024-07-15 22:26:59.806302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.738 qpair failed and we were unable to recover it. 00:29:34.738 [2024-07-15 22:26:59.806785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.738 [2024-07-15 22:26:59.806792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.738 qpair failed and we were unable to recover it. 00:29:34.738 [2024-07-15 22:26:59.806998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.738 [2024-07-15 22:26:59.807005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.738 qpair failed and we were unable to recover it. 00:29:34.738 [2024-07-15 22:26:59.807413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.738 [2024-07-15 22:26:59.807423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.738 qpair failed and we were unable to recover it. 00:29:34.738 [2024-07-15 22:26:59.807629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.738 [2024-07-15 22:26:59.807636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.738 qpair failed and we were unable to recover it. 00:29:34.738 [2024-07-15 22:26:59.808055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.738 [2024-07-15 22:26:59.808062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.739 qpair failed and we were unable to recover it. 00:29:34.739 [2024-07-15 22:26:59.808465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.739 [2024-07-15 22:26:59.808473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.739 qpair failed and we were unable to recover it. 00:29:34.739 [2024-07-15 22:26:59.808680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.739 [2024-07-15 22:26:59.808687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.739 qpair failed and we were unable to recover it. 
00:29:34.739 [2024-07-15 22:26:59.809112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.739 [2024-07-15 22:26:59.809120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.739 qpair failed and we were unable to recover it. 00:29:34.739 [2024-07-15 22:26:59.809481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.739 [2024-07-15 22:26:59.809489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.739 qpair failed and we were unable to recover it. 00:29:34.739 [2024-07-15 22:26:59.809879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.739 [2024-07-15 22:26:59.809886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.739 qpair failed and we were unable to recover it. 00:29:34.739 [2024-07-15 22:26:59.810163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.739 [2024-07-15 22:26:59.810171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.739 qpair failed and we were unable to recover it. 00:29:34.739 [2024-07-15 22:26:59.810467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.739 [2024-07-15 22:26:59.810473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.739 qpair failed and we were unable to recover it. 00:29:34.739 [2024-07-15 22:26:59.810865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.739 [2024-07-15 22:26:59.810872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.739 qpair failed and we were unable to recover it. 00:29:34.739 [2024-07-15 22:26:59.811272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.739 [2024-07-15 22:26:59.811279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.739 qpair failed and we were unable to recover it. 00:29:34.739 [2024-07-15 22:26:59.811675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.739 [2024-07-15 22:26:59.811681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.739 qpair failed and we were unable to recover it. 00:29:34.739 [2024-07-15 22:26:59.811836] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:34.739 [2024-07-15 22:26:59.812093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.739 [2024-07-15 22:26:59.812103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.739 qpair failed and we were unable to recover it. 
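The xtrace lines interleaved above mark the start of the target-side setup: host/target_disconnect.sh calls rpc_cmd nvmf_create_transport -t tcp -o, and the target acknowledges with the "TCP Transport Init" notice. A rough standalone equivalent, assuming the usual SPDK repository layout and that rpc_cmd forwards its arguments to scripts/rpc.py (both assumptions; the flags are copied verbatim from the trace above):

  # create the TCP transport on an already running nvmf_tgt, as the test does through rpc_cmd
  ./scripts/rpc.py nvmf_create_transport -t tcp -o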
00:29:34.739 [2024-07-15 22:26:59.812370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.739 [2024-07-15 22:26:59.812377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.739 qpair failed and we were unable to recover it. 00:29:34.739 [2024-07-15 22:26:59.812598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.739 [2024-07-15 22:26:59.812605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.739 qpair failed and we were unable to recover it. 00:29:34.739 [2024-07-15 22:26:59.812790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.739 [2024-07-15 22:26:59.812797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.739 qpair failed and we were unable to recover it. 00:29:34.739 [2024-07-15 22:26:59.813248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.739 [2024-07-15 22:26:59.813255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.739 qpair failed and we were unable to recover it. 00:29:34.739 [2024-07-15 22:26:59.813665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.739 [2024-07-15 22:26:59.813672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.739 qpair failed and we were unable to recover it. 00:29:34.739 [2024-07-15 22:26:59.814067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.739 [2024-07-15 22:26:59.814073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.739 qpair failed and we were unable to recover it. 00:29:34.739 [2024-07-15 22:26:59.814377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.739 [2024-07-15 22:26:59.814384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.739 qpair failed and we were unable to recover it. 00:29:34.739 [2024-07-15 22:26:59.814788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.739 [2024-07-15 22:26:59.814794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.739 qpair failed and we were unable to recover it. 00:29:34.739 [2024-07-15 22:26:59.815184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.739 [2024-07-15 22:26:59.815191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.739 qpair failed and we were unable to recover it. 00:29:34.739 [2024-07-15 22:26:59.815536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.739 [2024-07-15 22:26:59.815543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.739 qpair failed and we were unable to recover it. 
00:29:34.739 [2024-07-15 22:26:59.815799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.739 [2024-07-15 22:26:59.815806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.739 qpair failed and we were unable to recover it. 00:29:34.739 [2024-07-15 22:26:59.816194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.739 [2024-07-15 22:26:59.816201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.739 qpair failed and we were unable to recover it. 00:29:34.739 [2024-07-15 22:26:59.816486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.739 [2024-07-15 22:26:59.816494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.739 qpair failed and we were unable to recover it. 00:29:34.739 [2024-07-15 22:26:59.816720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.739 [2024-07-15 22:26:59.816727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.739 qpair failed and we were unable to recover it. 00:29:34.739 [2024-07-15 22:26:59.817137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.739 [2024-07-15 22:26:59.817144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.739 qpair failed and we were unable to recover it. 00:29:34.739 [2024-07-15 22:26:59.817449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.739 [2024-07-15 22:26:59.817456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.739 qpair failed and we were unable to recover it. 00:29:34.739 [2024-07-15 22:26:59.817778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.739 [2024-07-15 22:26:59.817786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.739 qpair failed and we were unable to recover it. 00:29:34.739 [2024-07-15 22:26:59.818203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.739 [2024-07-15 22:26:59.818210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.739 qpair failed and we were unable to recover it. 00:29:34.739 [2024-07-15 22:26:59.818635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.739 [2024-07-15 22:26:59.818641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.739 qpair failed and we were unable to recover it. 00:29:34.739 [2024-07-15 22:26:59.819031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.739 [2024-07-15 22:26:59.819038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.739 qpair failed and we were unable to recover it. 
00:29:34.739 [2024-07-15 22:26:59.819442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.739 [2024-07-15 22:26:59.819449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.739 qpair failed and we were unable to recover it. 00:29:34.739 [2024-07-15 22:26:59.819840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.739 [2024-07-15 22:26:59.819846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.739 qpair failed and we were unable to recover it. 00:29:34.739 [2024-07-15 22:26:59.820252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.739 [2024-07-15 22:26:59.820259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.739 qpair failed and we were unable to recover it. 00:29:34.739 [2024-07-15 22:26:59.820447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.739 [2024-07-15 22:26:59.820497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.739 qpair failed and we were unable to recover it. 00:29:34.739 22:26:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.739 [2024-07-15 22:26:59.820909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.739 [2024-07-15 22:26:59.820916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.739 qpair failed and we were unable to recover it. 00:29:34.739 22:26:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:34.739 [2024-07-15 22:26:59.821339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.739 [2024-07-15 22:26:59.821347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.739 qpair failed and we were unable to recover it. 00:29:34.739 22:26:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.739 [2024-07-15 22:26:59.821746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.739 [2024-07-15 22:26:59.821754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.739 qpair failed and we were unable to recover it. 00:29:34.740 22:26:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:34.740 [2024-07-15 22:26:59.822150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.740 [2024-07-15 22:26:59.822158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.740 qpair failed and we were unable to recover it. 
00:29:34.740 [2024-07-15 22:26:59.822427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.740 [2024-07-15 22:26:59.822433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.740 qpair failed and we were unable to recover it. 00:29:34.740 [2024-07-15 22:26:59.822828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.740 [2024-07-15 22:26:59.822834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.740 qpair failed and we were unable to recover it. 00:29:34.740 [2024-07-15 22:26:59.823226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.740 [2024-07-15 22:26:59.823232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.740 qpair failed and we were unable to recover it. 00:29:34.740 [2024-07-15 22:26:59.823537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.740 [2024-07-15 22:26:59.823543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.740 qpair failed and we were unable to recover it. 00:29:34.740 [2024-07-15 22:26:59.823981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.740 [2024-07-15 22:26:59.823987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.740 qpair failed and we were unable to recover it. 00:29:34.740 [2024-07-15 22:26:59.824379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.740 [2024-07-15 22:26:59.824387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.740 qpair failed and we were unable to recover it. 00:29:34.740 [2024-07-15 22:26:59.824702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.740 [2024-07-15 22:26:59.824709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.740 qpair failed and we were unable to recover it. 00:29:34.740 [2024-07-15 22:26:59.825125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.740 [2024-07-15 22:26:59.825132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.740 qpair failed and we were unable to recover it. 00:29:34.740 [2024-07-15 22:26:59.825331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.740 [2024-07-15 22:26:59.825338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.740 qpair failed and we were unable to recover it. 00:29:34.740 [2024-07-15 22:26:59.825729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.740 [2024-07-15 22:26:59.825735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.740 qpair failed and we were unable to recover it. 
00:29:34.740 [2024-07-15 22:26:59.826134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.740 [2024-07-15 22:26:59.826141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.740 qpair failed and we were unable to recover it. 00:29:34.740 [2024-07-15 22:26:59.826529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.740 [2024-07-15 22:26:59.826535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.740 qpair failed and we were unable to recover it. 00:29:34.740 [2024-07-15 22:26:59.826941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.740 [2024-07-15 22:26:59.826947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.740 qpair failed and we were unable to recover it. 00:29:34.740 [2024-07-15 22:26:59.827516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.740 [2024-07-15 22:26:59.827543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.740 qpair failed and we were unable to recover it. 00:29:34.740 [2024-07-15 22:26:59.827953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.740 [2024-07-15 22:26:59.827961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.740 qpair failed and we were unable to recover it. 00:29:34.740 [2024-07-15 22:26:59.828453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.740 [2024-07-15 22:26:59.828482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.740 qpair failed and we were unable to recover it. 00:29:34.740 [2024-07-15 22:26:59.828795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.740 [2024-07-15 22:26:59.828804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.740 qpair failed and we were unable to recover it. 00:29:34.740 [2024-07-15 22:26:59.829329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.740 [2024-07-15 22:26:59.829356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.740 qpair failed and we were unable to recover it. 00:29:34.740 [2024-07-15 22:26:59.829774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.740 [2024-07-15 22:26:59.829782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.740 qpair failed and we were unable to recover it. 00:29:34.740 [2024-07-15 22:26:59.830173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.740 [2024-07-15 22:26:59.830180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.740 qpair failed and we were unable to recover it. 
00:29:34.740 [2024-07-15 22:26:59.830557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.740 [2024-07-15 22:26:59.830563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.740 qpair failed and we were unable to recover it. 00:29:34.740 [2024-07-15 22:26:59.830959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.740 [2024-07-15 22:26:59.830966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.740 qpair failed and we were unable to recover it. 00:29:34.740 [2024-07-15 22:26:59.831176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.740 [2024-07-15 22:26:59.831183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.740 qpair failed and we were unable to recover it. 00:29:34.740 [2024-07-15 22:26:59.831449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.740 [2024-07-15 22:26:59.831456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.740 qpair failed and we were unable to recover it. 00:29:34.740 [2024-07-15 22:26:59.831779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.740 [2024-07-15 22:26:59.831786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.740 qpair failed and we were unable to recover it. 00:29:34.740 [2024-07-15 22:26:59.832208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.740 [2024-07-15 22:26:59.832214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.740 qpair failed and we were unable to recover it. 00:29:34.740 [2024-07-15 22:26:59.832606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.740 [2024-07-15 22:26:59.832613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.740 qpair failed and we were unable to recover it. 00:29:34.740 22:26:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.740 [2024-07-15 22:26:59.833002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.740 [2024-07-15 22:26:59.833010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.740 qpair failed and we were unable to recover it. 00:29:34.740 22:26:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:34.740 [2024-07-15 22:26:59.833427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.740 [2024-07-15 22:26:59.833434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.740 qpair failed and we were unable to recover it. 
00:29:34.741 22:26:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.741 [2024-07-15 22:26:59.833827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.741 [2024-07-15 22:26:59.833834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.741 qpair failed and we were unable to recover it. 00:29:34.741 22:26:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:34.741 [2024-07-15 22:26:59.834237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.741 [2024-07-15 22:26:59.834245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.741 qpair failed and we were unable to recover it. 00:29:34.741 [2024-07-15 22:26:59.834664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.741 [2024-07-15 22:26:59.834672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.741 qpair failed and we were unable to recover it. 00:29:34.741 [2024-07-15 22:26:59.835110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.741 [2024-07-15 22:26:59.835118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.741 qpair failed and we were unable to recover it. 00:29:34.741 [2024-07-15 22:26:59.835431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.741 [2024-07-15 22:26:59.835439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.741 qpair failed and we were unable to recover it. 00:29:34.741 [2024-07-15 22:26:59.835857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.741 [2024-07-15 22:26:59.835865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.741 qpair failed and we were unable to recover it. 00:29:34.741 [2024-07-15 22:26:59.836345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.741 [2024-07-15 22:26:59.836352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.741 qpair failed and we were unable to recover it. 00:29:34.741 [2024-07-15 22:26:59.836761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.741 [2024-07-15 22:26:59.836768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.741 qpair failed and we were unable to recover it. 00:29:34.741 [2024-07-15 22:26:59.837159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.741 [2024-07-15 22:26:59.837166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.741 qpair failed and we were unable to recover it. 
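Between the connect retries, the trace shows the subsystem being assembled: nvmf_create_subsystem creates nqn.2016-06.io.spdk:cnode1 (allow any host, serial SPDK00000000000001) and nvmf_subsystem_add_ns attaches the Malloc0 bdev to it; the bare "Malloc0" line earlier is presumably the bdev name echoed back when that malloc bdev was created. A sketch of the same sequence with scripts/rpc.py; the rpc.py path and the malloc bdev size/block-size arguments (64 MiB, 512-byte blocks) are assumptions, while the NQN, serial number and bdev name are taken from the trace:

  ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512          # prints the bdev name, "Malloc0"
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0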
00:29:34.741 [2024-07-15 22:26:59.837568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.741 [2024-07-15 22:26:59.837574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.741 qpair failed and we were unable to recover it. 00:29:34.741 [2024-07-15 22:26:59.837966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.741 [2024-07-15 22:26:59.837972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.741 qpair failed and we were unable to recover it. 00:29:34.741 [2024-07-15 22:26:59.838384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.741 [2024-07-15 22:26:59.838392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.741 qpair failed and we were unable to recover it. 00:29:34.741 [2024-07-15 22:26:59.838626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.741 [2024-07-15 22:26:59.838634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.741 qpair failed and we were unable to recover it. 00:29:34.741 [2024-07-15 22:26:59.839038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.741 [2024-07-15 22:26:59.839046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.741 qpair failed and we were unable to recover it. 00:29:34.741 [2024-07-15 22:26:59.839433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.741 [2024-07-15 22:26:59.839440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.741 qpair failed and we were unable to recover it. 00:29:34.741 [2024-07-15 22:26:59.839613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.741 [2024-07-15 22:26:59.839620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.741 qpair failed and we were unable to recover it. 00:29:34.741 [2024-07-15 22:26:59.839829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.741 [2024-07-15 22:26:59.839836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.741 qpair failed and we were unable to recover it. 00:29:34.741 [2024-07-15 22:26:59.840242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.741 [2024-07-15 22:26:59.840249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.741 qpair failed and we were unable to recover it. 00:29:34.741 [2024-07-15 22:26:59.840624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.741 [2024-07-15 22:26:59.840630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.741 qpair failed and we were unable to recover it. 
00:29:34.741 [2024-07-15 22:26:59.841022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.741 [2024-07-15 22:26:59.841029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.741 qpair failed and we were unable to recover it. 00:29:34.741 [2024-07-15 22:26:59.841386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.741 [2024-07-15 22:26:59.841393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.741 qpair failed and we were unable to recover it. 00:29:34.741 [2024-07-15 22:26:59.841657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.741 [2024-07-15 22:26:59.841664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.741 qpair failed and we were unable to recover it. 00:29:34.741 [2024-07-15 22:26:59.841891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.741 [2024-07-15 22:26:59.841897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.741 qpair failed and we were unable to recover it. 00:29:34.741 [2024-07-15 22:26:59.842195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.741 [2024-07-15 22:26:59.842201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.741 qpair failed and we were unable to recover it. 00:29:34.741 [2024-07-15 22:26:59.842427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.741 [2024-07-15 22:26:59.842433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.741 qpair failed and we were unable to recover it. 00:29:34.741 [2024-07-15 22:26:59.842815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.741 [2024-07-15 22:26:59.842822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.741 qpair failed and we were unable to recover it. 00:29:34.741 [2024-07-15 22:26:59.843226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.741 [2024-07-15 22:26:59.843233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.741 qpair failed and we were unable to recover it. 00:29:34.741 [2024-07-15 22:26:59.843670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.741 [2024-07-15 22:26:59.843677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.741 qpair failed and we were unable to recover it. 00:29:34.741 [2024-07-15 22:26:59.844074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.741 [2024-07-15 22:26:59.844081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.741 qpair failed and we were unable to recover it. 
00:29:34.741 [2024-07-15 22:26:59.844282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.741 [2024-07-15 22:26:59.844292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.741 qpair failed and we were unable to recover it. 00:29:34.741 [2024-07-15 22:26:59.844611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.741 [2024-07-15 22:26:59.844619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.741 qpair failed and we were unable to recover it. 00:29:34.741 [2024-07-15 22:26:59.844822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.741 [2024-07-15 22:26:59.844829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.741 qpair failed and we were unable to recover it. 00:29:34.741 22:26:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.741 [2024-07-15 22:26:59.845256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.741 [2024-07-15 22:26:59.845264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.741 qpair failed and we were unable to recover it. 00:29:34.741 [2024-07-15 22:26:59.845521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.741 [2024-07-15 22:26:59.845527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.741 qpair failed and we were unable to recover it. 00:29:34.741 22:26:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:34.741 22:26:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.741 [2024-07-15 22:26:59.845938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.741 [2024-07-15 22:26:59.845945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.741 qpair failed and we were unable to recover it. 00:29:34.741 22:26:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:34.741 [2024-07-15 22:26:59.846349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.741 [2024-07-15 22:26:59.846356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.741 qpair failed and we were unable to recover it. 00:29:34.741 [2024-07-15 22:26:59.846770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.741 [2024-07-15 22:26:59.846776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.741 qpair failed and we were unable to recover it. 
00:29:34.741 [2024-07-15 22:26:59.847167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.742 [2024-07-15 22:26:59.847173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.742 qpair failed and we were unable to recover it. 00:29:34.742 [2024-07-15 22:26:59.847379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.742 [2024-07-15 22:26:59.847387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.742 qpair failed and we were unable to recover it. 00:29:34.742 [2024-07-15 22:26:59.847798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.742 [2024-07-15 22:26:59.847805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.742 qpair failed and we were unable to recover it. 00:29:34.742 [2024-07-15 22:26:59.848017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.742 [2024-07-15 22:26:59.848024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.742 qpair failed and we were unable to recover it. 00:29:34.742 [2024-07-15 22:26:59.848423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.742 [2024-07-15 22:26:59.848429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.742 qpair failed and we were unable to recover it. 00:29:34.742 [2024-07-15 22:26:59.848824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.742 [2024-07-15 22:26:59.848830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.742 qpair failed and we were unable to recover it. 00:29:34.742 [2024-07-15 22:26:59.849273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.742 [2024-07-15 22:26:59.849279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.742 qpair failed and we were unable to recover it. 00:29:34.742 [2024-07-15 22:26:59.849700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.742 [2024-07-15 22:26:59.849706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.742 qpair failed and we were unable to recover it. 00:29:34.742 [2024-07-15 22:26:59.850095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.742 [2024-07-15 22:26:59.850102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.742 qpair failed and we were unable to recover it. 00:29:34.742 [2024-07-15 22:26:59.850295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.742 [2024-07-15 22:26:59.850304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.742 qpair failed and we were unable to recover it. 
00:29:34.742 [2024-07-15 22:26:59.850792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.742 [2024-07-15 22:26:59.850799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.742 qpair failed and we were unable to recover it. 00:29:34.742 [2024-07-15 22:26:59.851095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.742 [2024-07-15 22:26:59.851102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.742 qpair failed and we were unable to recover it. 00:29:34.742 [2024-07-15 22:26:59.851364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.742 [2024-07-15 22:26:59.851371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.742 qpair failed and we were unable to recover it. 00:29:34.742 [2024-07-15 22:26:59.851763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.742 [2024-07-15 22:26:59.851770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.742 qpair failed and we were unable to recover it. 00:29:34.742 [2024-07-15 22:26:59.851978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.742 [2024-07-15 22:26:59.851987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6158000b90 with addr=10.0.0.2, port=4420 00:29:34.742 qpair failed and we were unable to recover it. 00:29:34.742 [2024-07-15 22:26:59.852088] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:34.742 22:26:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.742 22:26:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:34.742 22:26:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.742 22:26:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:34.742 [2024-07-15 22:26:59.862638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.742 [2024-07-15 22:26:59.862724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.742 [2024-07-15 22:26:59.862738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.742 [2024-07-15 22:26:59.862743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.742 [2024-07-15 22:26:59.862747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:34.742 [2024-07-15 22:26:59.862763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.742 qpair failed and we were unable to recover it. 
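With the subsystem in place, the test adds the data listener and then the discovery listener, and the target confirms it is listening on 10.0.0.2 port 4420; from this point the host's connect() calls are no longer refused. A sketch of the two listener RPCs traced above (rpc.py path assumed; transport, address and port are from the log). The literal "discovery" argument targets the discovery subsystem rather than a named NQN:

  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420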
00:29:34.742 22:26:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.742 22:26:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2962456 00:29:34.742 [2024-07-15 22:26:59.872610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.742 [2024-07-15 22:26:59.872695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.742 [2024-07-15 22:26:59.872707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.742 [2024-07-15 22:26:59.872712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.742 [2024-07-15 22:26:59.872716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:34.742 [2024-07-15 22:26:59.872728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.742 qpair failed and we were unable to recover it. 00:29:34.742 [2024-07-15 22:26:59.882547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.742 [2024-07-15 22:26:59.882622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.742 [2024-07-15 22:26:59.882635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.742 [2024-07-15 22:26:59.882640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.742 [2024-07-15 22:26:59.882644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:34.742 [2024-07-15 22:26:59.882656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.742 qpair failed and we were unable to recover it. 00:29:34.742 [2024-07-15 22:26:59.892592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.742 [2024-07-15 22:26:59.892668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.742 [2024-07-15 22:26:59.892680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.742 [2024-07-15 22:26:59.892685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.742 [2024-07-15 22:26:59.892689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:34.742 [2024-07-15 22:26:59.892701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.742 qpair failed and we were unable to recover it. 
00:29:34.742 [2024-07-15 22:26:59.902607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.742 [2024-07-15 22:26:59.902685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.742 [2024-07-15 22:26:59.902697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.742 [2024-07-15 22:26:59.902702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.742 [2024-07-15 22:26:59.902706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:34.742 [2024-07-15 22:26:59.902717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.742 qpair failed and we were unable to recover it. 00:29:34.742 [2024-07-15 22:26:59.912614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.742 [2024-07-15 22:26:59.912705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.742 [2024-07-15 22:26:59.912717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.742 [2024-07-15 22:26:59.912722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.742 [2024-07-15 22:26:59.912726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:34.742 [2024-07-15 22:26:59.912737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.742 qpair failed and we were unable to recover it. 00:29:34.742 [2024-07-15 22:26:59.922677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.742 [2024-07-15 22:26:59.922748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.742 [2024-07-15 22:26:59.922761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.742 [2024-07-15 22:26:59.922766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.742 [2024-07-15 22:26:59.922770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:34.742 [2024-07-15 22:26:59.922781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.742 qpair failed and we were unable to recover it. 
00:29:34.742 [2024-07-15 22:26:59.932578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.742 [2024-07-15 22:26:59.932656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.742 [2024-07-15 22:26:59.932668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.742 [2024-07-15 22:26:59.932673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.742 [2024-07-15 22:26:59.932677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:34.742 [2024-07-15 22:26:59.932688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.742 qpair failed and we were unable to recover it. 00:29:34.743 [2024-07-15 22:26:59.942721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.743 [2024-07-15 22:26:59.942797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.743 [2024-07-15 22:26:59.942809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.743 [2024-07-15 22:26:59.942814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.743 [2024-07-15 22:26:59.942818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:34.743 [2024-07-15 22:26:59.942828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.743 qpair failed and we were unable to recover it. 00:29:34.743 [2024-07-15 22:26:59.952738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.743 [2024-07-15 22:26:59.952816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.743 [2024-07-15 22:26:59.952835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.743 [2024-07-15 22:26:59.952841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.743 [2024-07-15 22:26:59.952849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:34.743 [2024-07-15 22:26:59.952864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.743 qpair failed and we were unable to recover it. 
00:29:34.743 [2024-07-15 22:26:59.962754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.743 [2024-07-15 22:26:59.962827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.743 [2024-07-15 22:26:59.962840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.743 [2024-07-15 22:26:59.962845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.743 [2024-07-15 22:26:59.962850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:34.743 [2024-07-15 22:26:59.962861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.743 qpair failed and we were unable to recover it. 00:29:34.743 [2024-07-15 22:26:59.972771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.743 [2024-07-15 22:26:59.972843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.743 [2024-07-15 22:26:59.972855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.743 [2024-07-15 22:26:59.972860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.743 [2024-07-15 22:26:59.972864] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:34.743 [2024-07-15 22:26:59.972875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.743 qpair failed and we were unable to recover it. 00:29:34.743 [2024-07-15 22:26:59.982820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.743 [2024-07-15 22:26:59.982898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.743 [2024-07-15 22:26:59.982917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.743 [2024-07-15 22:26:59.982924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.743 [2024-07-15 22:26:59.982928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:34.743 [2024-07-15 22:26:59.982942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.743 qpair failed and we were unable to recover it. 
00:29:34.743 [2024-07-15 22:26:59.992854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.743 [2024-07-15 22:26:59.992926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.743 [2024-07-15 22:26:59.992945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.743 [2024-07-15 22:26:59.992951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.743 [2024-07-15 22:26:59.992955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:34.743 [2024-07-15 22:26:59.992969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.743 qpair failed and we were unable to recover it. 00:29:34.743 [2024-07-15 22:27:00.002879] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.743 [2024-07-15 22:27:00.002954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.743 [2024-07-15 22:27:00.002969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.743 [2024-07-15 22:27:00.002974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.743 [2024-07-15 22:27:00.002979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:34.743 [2024-07-15 22:27:00.002992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.743 qpair failed and we were unable to recover it. 00:29:34.743 [2024-07-15 22:27:00.012917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.743 [2024-07-15 22:27:00.013001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.743 [2024-07-15 22:27:00.013020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.743 [2024-07-15 22:27:00.013026] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.743 [2024-07-15 22:27:00.013031] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:34.743 [2024-07-15 22:27:00.013045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.743 qpair failed and we were unable to recover it. 
00:29:34.743 [2024-07-15 22:27:00.022944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.743 [2024-07-15 22:27:00.023037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.743 [2024-07-15 22:27:00.023112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.743 [2024-07-15 22:27:00.023118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.743 [2024-07-15 22:27:00.023128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:34.743 [2024-07-15 22:27:00.023142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.743 qpair failed and we were unable to recover it. 00:29:34.743 [2024-07-15 22:27:00.032852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.743 [2024-07-15 22:27:00.032921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.743 [2024-07-15 22:27:00.032934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.743 [2024-07-15 22:27:00.032940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.743 [2024-07-15 22:27:00.032944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:34.743 [2024-07-15 22:27:00.032955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.743 qpair failed and we were unable to recover it. 00:29:34.743 [2024-07-15 22:27:00.042959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.743 [2024-07-15 22:27:00.043025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.743 [2024-07-15 22:27:00.043037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.743 [2024-07-15 22:27:00.043047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.743 [2024-07-15 22:27:00.043051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:34.743 [2024-07-15 22:27:00.043063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.743 qpair failed and we were unable to recover it. 
00:29:34.743 [2024-07-15 22:27:00.053078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.743 [2024-07-15 22:27:00.053154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.743 [2024-07-15 22:27:00.053166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.743 [2024-07-15 22:27:00.053172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.743 [2024-07-15 22:27:00.053177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:34.743 [2024-07-15 22:27:00.053188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.743 qpair failed and we were unable to recover it. 00:29:35.006 [2024-07-15 22:27:00.063018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.006 [2024-07-15 22:27:00.063095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.006 [2024-07-15 22:27:00.063107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.006 [2024-07-15 22:27:00.063112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.006 [2024-07-15 22:27:00.063116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.006 [2024-07-15 22:27:00.063132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.006 qpair failed and we were unable to recover it. 00:29:35.006 [2024-07-15 22:27:00.073032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.006 [2024-07-15 22:27:00.073106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.006 [2024-07-15 22:27:00.073118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.006 [2024-07-15 22:27:00.073127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.006 [2024-07-15 22:27:00.073132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.006 [2024-07-15 22:27:00.073144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.006 qpair failed and we were unable to recover it. 
00:29:35.006 [2024-07-15 22:27:00.083158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.006 [2024-07-15 22:27:00.083229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.006 [2024-07-15 22:27:00.083241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.006 [2024-07-15 22:27:00.083246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.006 [2024-07-15 22:27:00.083251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.006 [2024-07-15 22:27:00.083261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.006 qpair failed and we were unable to recover it. 00:29:35.006 [2024-07-15 22:27:00.093117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.006 [2024-07-15 22:27:00.093193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.006 [2024-07-15 22:27:00.093206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.006 [2024-07-15 22:27:00.093211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.006 [2024-07-15 22:27:00.093216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.006 [2024-07-15 22:27:00.093227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.006 qpair failed and we were unable to recover it. 00:29:35.006 [2024-07-15 22:27:00.103151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.006 [2024-07-15 22:27:00.103224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.006 [2024-07-15 22:27:00.103237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.006 [2024-07-15 22:27:00.103242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.006 [2024-07-15 22:27:00.103247] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.006 [2024-07-15 22:27:00.103257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.006 qpair failed and we were unable to recover it. 
00:29:35.006 [2024-07-15 22:27:00.113179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.006 [2024-07-15 22:27:00.113249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.006 [2024-07-15 22:27:00.113262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.006 [2024-07-15 22:27:00.113268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.006 [2024-07-15 22:27:00.113272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.006 [2024-07-15 22:27:00.113284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.006 qpair failed and we were unable to recover it. 00:29:35.006 [2024-07-15 22:27:00.123116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.006 [2024-07-15 22:27:00.123213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.006 [2024-07-15 22:27:00.123225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.006 [2024-07-15 22:27:00.123231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.006 [2024-07-15 22:27:00.123235] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.006 [2024-07-15 22:27:00.123246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.006 qpair failed and we were unable to recover it. 00:29:35.006 [2024-07-15 22:27:00.133355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.007 [2024-07-15 22:27:00.133432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.007 [2024-07-15 22:27:00.133443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.007 [2024-07-15 22:27:00.133451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.007 [2024-07-15 22:27:00.133456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.007 [2024-07-15 22:27:00.133467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.007 qpair failed and we were unable to recover it. 
00:29:35.007 [2024-07-15 22:27:00.143359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.007 [2024-07-15 22:27:00.143439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.007 [2024-07-15 22:27:00.143450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.007 [2024-07-15 22:27:00.143456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.007 [2024-07-15 22:27:00.143460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.007 [2024-07-15 22:27:00.143471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.007 qpair failed and we were unable to recover it. 00:29:35.007 [2024-07-15 22:27:00.153350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.007 [2024-07-15 22:27:00.153437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.007 [2024-07-15 22:27:00.153449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.007 [2024-07-15 22:27:00.153455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.007 [2024-07-15 22:27:00.153459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.007 [2024-07-15 22:27:00.153470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.007 qpair failed and we were unable to recover it. 00:29:35.007 [2024-07-15 22:27:00.163395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.007 [2024-07-15 22:27:00.163462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.007 [2024-07-15 22:27:00.163474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.007 [2024-07-15 22:27:00.163480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.007 [2024-07-15 22:27:00.163484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.007 [2024-07-15 22:27:00.163495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.007 qpair failed and we were unable to recover it. 
00:29:35.007 [2024-07-15 22:27:00.173322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.007 [2024-07-15 22:27:00.173399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.007 [2024-07-15 22:27:00.173411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.007 [2024-07-15 22:27:00.173416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.007 [2024-07-15 22:27:00.173421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.007 [2024-07-15 22:27:00.173432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.007 qpair failed and we were unable to recover it. 00:29:35.007 [2024-07-15 22:27:00.183431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.007 [2024-07-15 22:27:00.183504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.007 [2024-07-15 22:27:00.183516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.007 [2024-07-15 22:27:00.183521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.007 [2024-07-15 22:27:00.183526] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.007 [2024-07-15 22:27:00.183537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.007 qpair failed and we were unable to recover it. 00:29:35.007 [2024-07-15 22:27:00.193385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.007 [2024-07-15 22:27:00.193454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.007 [2024-07-15 22:27:00.193466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.007 [2024-07-15 22:27:00.193471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.007 [2024-07-15 22:27:00.193475] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.007 [2024-07-15 22:27:00.193486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.007 qpair failed and we were unable to recover it. 
00:29:35.007 [2024-07-15 22:27:00.203398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.007 [2024-07-15 22:27:00.203499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.007 [2024-07-15 22:27:00.203511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.007 [2024-07-15 22:27:00.203516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.007 [2024-07-15 22:27:00.203520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.007 [2024-07-15 22:27:00.203531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.007 qpair failed and we were unable to recover it. 00:29:35.007 [2024-07-15 22:27:00.213438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.007 [2024-07-15 22:27:00.213509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.007 [2024-07-15 22:27:00.213522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.007 [2024-07-15 22:27:00.213527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.007 [2024-07-15 22:27:00.213531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.007 [2024-07-15 22:27:00.213541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.007 qpair failed and we were unable to recover it. 00:29:35.007 [2024-07-15 22:27:00.223494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.007 [2024-07-15 22:27:00.223572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.007 [2024-07-15 22:27:00.223586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.007 [2024-07-15 22:27:00.223591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.007 [2024-07-15 22:27:00.223595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.007 [2024-07-15 22:27:00.223606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.007 qpair failed and we were unable to recover it. 
00:29:35.007 [2024-07-15 22:27:00.233467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.007 [2024-07-15 22:27:00.233532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.007 [2024-07-15 22:27:00.233544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.007 [2024-07-15 22:27:00.233549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.007 [2024-07-15 22:27:00.233553] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.007 [2024-07-15 22:27:00.233564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.007 qpair failed and we were unable to recover it. 00:29:35.007 [2024-07-15 22:27:00.243554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.007 [2024-07-15 22:27:00.243624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.007 [2024-07-15 22:27:00.243636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.007 [2024-07-15 22:27:00.243641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.007 [2024-07-15 22:27:00.243645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.007 [2024-07-15 22:27:00.243656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.007 qpair failed and we were unable to recover it. 00:29:35.007 [2024-07-15 22:27:00.253570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.007 [2024-07-15 22:27:00.253671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.007 [2024-07-15 22:27:00.253683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.007 [2024-07-15 22:27:00.253688] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.007 [2024-07-15 22:27:00.253692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.007 [2024-07-15 22:27:00.253703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.007 qpair failed and we were unable to recover it. 
00:29:35.007 [2024-07-15 22:27:00.263563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.007 [2024-07-15 22:27:00.263638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.007 [2024-07-15 22:27:00.263650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.007 [2024-07-15 22:27:00.263655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.007 [2024-07-15 22:27:00.263659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.007 [2024-07-15 22:27:00.263673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.008 qpair failed and we were unable to recover it. 00:29:35.008 [2024-07-15 22:27:00.273590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.008 [2024-07-15 22:27:00.273686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.008 [2024-07-15 22:27:00.273698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.008 [2024-07-15 22:27:00.273703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.008 [2024-07-15 22:27:00.273707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.008 [2024-07-15 22:27:00.273718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.008 qpair failed and we were unable to recover it. 00:29:35.008 [2024-07-15 22:27:00.283621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.008 [2024-07-15 22:27:00.283687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.008 [2024-07-15 22:27:00.283700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.008 [2024-07-15 22:27:00.283704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.008 [2024-07-15 22:27:00.283709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.008 [2024-07-15 22:27:00.283720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.008 qpair failed and we were unable to recover it. 
00:29:35.008 [2024-07-15 22:27:00.293636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.008 [2024-07-15 22:27:00.293707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.008 [2024-07-15 22:27:00.293719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.008 [2024-07-15 22:27:00.293724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.008 [2024-07-15 22:27:00.293728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.008 [2024-07-15 22:27:00.293739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.008 qpair failed and we were unable to recover it. 00:29:35.008 [2024-07-15 22:27:00.303661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.008 [2024-07-15 22:27:00.303745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.008 [2024-07-15 22:27:00.303757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.008 [2024-07-15 22:27:00.303762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.008 [2024-07-15 22:27:00.303766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.008 [2024-07-15 22:27:00.303777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.008 qpair failed and we were unable to recover it. 00:29:35.008 [2024-07-15 22:27:00.313699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.008 [2024-07-15 22:27:00.313777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.008 [2024-07-15 22:27:00.313800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.008 [2024-07-15 22:27:00.313806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.008 [2024-07-15 22:27:00.313811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.008 [2024-07-15 22:27:00.313826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.008 qpair failed and we were unable to recover it. 
00:29:35.008 [2024-07-15 22:27:00.323731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.008 [2024-07-15 22:27:00.323808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.008 [2024-07-15 22:27:00.323827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.008 [2024-07-15 22:27:00.323833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.008 [2024-07-15 22:27:00.323838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.008 [2024-07-15 22:27:00.323852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.008 qpair failed and we were unable to recover it. 00:29:35.270 [2024-07-15 22:27:00.333790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.270 [2024-07-15 22:27:00.333898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.270 [2024-07-15 22:27:00.333911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.270 [2024-07-15 22:27:00.333916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.270 [2024-07-15 22:27:00.333921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.270 [2024-07-15 22:27:00.333932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.271 qpair failed and we were unable to recover it. 00:29:35.271 [2024-07-15 22:27:00.343782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.271 [2024-07-15 22:27:00.343859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.271 [2024-07-15 22:27:00.343878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.271 [2024-07-15 22:27:00.343884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.271 [2024-07-15 22:27:00.343889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.271 [2024-07-15 22:27:00.343903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.271 qpair failed and we were unable to recover it. 
00:29:35.271 [2024-07-15 22:27:00.353790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.271 [2024-07-15 22:27:00.353858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.271 [2024-07-15 22:27:00.353870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.271 [2024-07-15 22:27:00.353876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.271 [2024-07-15 22:27:00.353883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.271 [2024-07-15 22:27:00.353895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.271 qpair failed and we were unable to recover it. 00:29:35.271 [2024-07-15 22:27:00.363828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.271 [2024-07-15 22:27:00.363895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.271 [2024-07-15 22:27:00.363908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.271 [2024-07-15 22:27:00.363913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.271 [2024-07-15 22:27:00.363917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.271 [2024-07-15 22:27:00.363928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.271 qpair failed and we were unable to recover it. 00:29:35.271 [2024-07-15 22:27:00.373872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.271 [2024-07-15 22:27:00.373941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.271 [2024-07-15 22:27:00.373953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.271 [2024-07-15 22:27:00.373958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.271 [2024-07-15 22:27:00.373962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.271 [2024-07-15 22:27:00.373972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.271 qpair failed and we were unable to recover it. 
00:29:35.271 [2024-07-15 22:27:00.383902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.271 [2024-07-15 22:27:00.383981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.271 [2024-07-15 22:27:00.383994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.271 [2024-07-15 22:27:00.383999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.271 [2024-07-15 22:27:00.384003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.271 [2024-07-15 22:27:00.384014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.271 qpair failed and we were unable to recover it. 00:29:35.271 [2024-07-15 22:27:00.393942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.271 [2024-07-15 22:27:00.394018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.271 [2024-07-15 22:27:00.394030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.271 [2024-07-15 22:27:00.394035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.271 [2024-07-15 22:27:00.394039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.271 [2024-07-15 22:27:00.394050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.271 qpair failed and we were unable to recover it. 00:29:35.271 [2024-07-15 22:27:00.403981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.271 [2024-07-15 22:27:00.404052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.271 [2024-07-15 22:27:00.404064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.271 [2024-07-15 22:27:00.404069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.271 [2024-07-15 22:27:00.404073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.271 [2024-07-15 22:27:00.404084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.271 qpair failed and we were unable to recover it. 
00:29:35.271 [2024-07-15 22:27:00.414009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.271 [2024-07-15 22:27:00.414083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.271 [2024-07-15 22:27:00.414096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.271 [2024-07-15 22:27:00.414101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.271 [2024-07-15 22:27:00.414105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.271 [2024-07-15 22:27:00.414116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.271 qpair failed and we were unable to recover it. 00:29:35.271 [2024-07-15 22:27:00.423892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.271 [2024-07-15 22:27:00.423968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.271 [2024-07-15 22:27:00.423981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.271 [2024-07-15 22:27:00.423986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.271 [2024-07-15 22:27:00.423990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.271 [2024-07-15 22:27:00.424001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.271 qpair failed and we were unable to recover it. 00:29:35.271 [2024-07-15 22:27:00.434030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.271 [2024-07-15 22:27:00.434103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.271 [2024-07-15 22:27:00.434115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.271 [2024-07-15 22:27:00.434120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.271 [2024-07-15 22:27:00.434129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.271 [2024-07-15 22:27:00.434140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.271 qpair failed and we were unable to recover it. 
00:29:35.271 [2024-07-15 22:27:00.444127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.271 [2024-07-15 22:27:00.444218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.271 [2024-07-15 22:27:00.444233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.271 [2024-07-15 22:27:00.444242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.271 [2024-07-15 22:27:00.444246] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.271 [2024-07-15 22:27:00.444258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.271 qpair failed and we were unable to recover it. 00:29:35.271 [2024-07-15 22:27:00.454096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.271 [2024-07-15 22:27:00.454172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.271 [2024-07-15 22:27:00.454184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.271 [2024-07-15 22:27:00.454189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.271 [2024-07-15 22:27:00.454193] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.271 [2024-07-15 22:27:00.454204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.271 qpair failed and we were unable to recover it. 00:29:35.271 [2024-07-15 22:27:00.464169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.271 [2024-07-15 22:27:00.464275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.271 [2024-07-15 22:27:00.464287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.271 [2024-07-15 22:27:00.464292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.271 [2024-07-15 22:27:00.464296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.271 [2024-07-15 22:27:00.464307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.271 qpair failed and we were unable to recover it. 
00:29:35.271 [2024-07-15 22:27:00.474119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.271 [2024-07-15 22:27:00.474188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.271 [2024-07-15 22:27:00.474200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.271 [2024-07-15 22:27:00.474205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.271 [2024-07-15 22:27:00.474209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.271 [2024-07-15 22:27:00.474219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.272 qpair failed and we were unable to recover it. 00:29:35.272 [2024-07-15 22:27:00.484223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.272 [2024-07-15 22:27:00.484294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.272 [2024-07-15 22:27:00.484307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.272 [2024-07-15 22:27:00.484312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.272 [2024-07-15 22:27:00.484316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.272 [2024-07-15 22:27:00.484326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.272 qpair failed and we were unable to recover it. 00:29:35.272 [2024-07-15 22:27:00.494211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.272 [2024-07-15 22:27:00.494289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.272 [2024-07-15 22:27:00.494302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.272 [2024-07-15 22:27:00.494307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.272 [2024-07-15 22:27:00.494311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.272 [2024-07-15 22:27:00.494322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.272 qpair failed and we were unable to recover it. 
00:29:35.272 [2024-07-15 22:27:00.504258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.272 [2024-07-15 22:27:00.504333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.272 [2024-07-15 22:27:00.504345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.272 [2024-07-15 22:27:00.504350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.272 [2024-07-15 22:27:00.504354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.272 [2024-07-15 22:27:00.504365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.272 qpair failed and we were unable to recover it. 00:29:35.272 [2024-07-15 22:27:00.514261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.272 [2024-07-15 22:27:00.514331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.272 [2024-07-15 22:27:00.514343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.272 [2024-07-15 22:27:00.514348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.272 [2024-07-15 22:27:00.514352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.272 [2024-07-15 22:27:00.514363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.272 qpair failed and we were unable to recover it. 00:29:35.272 [2024-07-15 22:27:00.524295] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.272 [2024-07-15 22:27:00.524397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.272 [2024-07-15 22:27:00.524409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.272 [2024-07-15 22:27:00.524414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.272 [2024-07-15 22:27:00.524419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.272 [2024-07-15 22:27:00.524429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.272 qpair failed and we were unable to recover it. 
00:29:35.272 [2024-07-15 22:27:00.534303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.272 [2024-07-15 22:27:00.534376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.272 [2024-07-15 22:27:00.534388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.272 [2024-07-15 22:27:00.534395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.272 [2024-07-15 22:27:00.534400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.272 [2024-07-15 22:27:00.534410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.272 qpair failed and we were unable to recover it. 00:29:35.272 [2024-07-15 22:27:00.544343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.272 [2024-07-15 22:27:00.544420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.272 [2024-07-15 22:27:00.544432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.272 [2024-07-15 22:27:00.544437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.272 [2024-07-15 22:27:00.544441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.272 [2024-07-15 22:27:00.544452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.272 qpair failed and we were unable to recover it. 00:29:35.272 [2024-07-15 22:27:00.554241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.272 [2024-07-15 22:27:00.554312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.272 [2024-07-15 22:27:00.554325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.272 [2024-07-15 22:27:00.554331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.272 [2024-07-15 22:27:00.554336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.272 [2024-07-15 22:27:00.554348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.272 qpair failed and we were unable to recover it. 
00:29:35.272 [2024-07-15 22:27:00.564415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.272 [2024-07-15 22:27:00.564489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.272 [2024-07-15 22:27:00.564501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.272 [2024-07-15 22:27:00.564506] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.272 [2024-07-15 22:27:00.564510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.272 [2024-07-15 22:27:00.564520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.272 qpair failed and we were unable to recover it. 00:29:35.272 [2024-07-15 22:27:00.574409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.272 [2024-07-15 22:27:00.574482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.272 [2024-07-15 22:27:00.574494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.272 [2024-07-15 22:27:00.574499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.272 [2024-07-15 22:27:00.574503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.272 [2024-07-15 22:27:00.574513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.272 qpair failed and we were unable to recover it. 00:29:35.272 [2024-07-15 22:27:00.584413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.272 [2024-07-15 22:27:00.584512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.272 [2024-07-15 22:27:00.584524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.272 [2024-07-15 22:27:00.584528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.272 [2024-07-15 22:27:00.584533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.272 [2024-07-15 22:27:00.584544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.272 qpair failed and we were unable to recover it. 
00:29:35.535 [2024-07-15 22:27:00.594494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.535 [2024-07-15 22:27:00.594566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.535 [2024-07-15 22:27:00.594578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.535 [2024-07-15 22:27:00.594583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.535 [2024-07-15 22:27:00.594587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.535 [2024-07-15 22:27:00.594598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.535 qpair failed and we were unable to recover it. 00:29:35.535 [2024-07-15 22:27:00.604386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.535 [2024-07-15 22:27:00.604456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.535 [2024-07-15 22:27:00.604468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.535 [2024-07-15 22:27:00.604473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.535 [2024-07-15 22:27:00.604477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.535 [2024-07-15 22:27:00.604488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.535 qpair failed and we were unable to recover it. 00:29:35.535 [2024-07-15 22:27:00.614563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.535 [2024-07-15 22:27:00.614641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.535 [2024-07-15 22:27:00.614653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.535 [2024-07-15 22:27:00.614658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.535 [2024-07-15 22:27:00.614662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.535 [2024-07-15 22:27:00.614673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.535 qpair failed and we were unable to recover it. 
00:29:35.535 [2024-07-15 22:27:00.624576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.535 [2024-07-15 22:27:00.624654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.535 [2024-07-15 22:27:00.624669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.535 [2024-07-15 22:27:00.624674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.535 [2024-07-15 22:27:00.624678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.535 [2024-07-15 22:27:00.624689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.535 qpair failed and we were unable to recover it. 00:29:35.535 [2024-07-15 22:27:00.634573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.535 [2024-07-15 22:27:00.634638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.535 [2024-07-15 22:27:00.634651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.535 [2024-07-15 22:27:00.634655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.535 [2024-07-15 22:27:00.634659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.535 [2024-07-15 22:27:00.634670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.535 qpair failed and we were unable to recover it. 00:29:35.535 [2024-07-15 22:27:00.644495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.535 [2024-07-15 22:27:00.644566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.535 [2024-07-15 22:27:00.644578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.535 [2024-07-15 22:27:00.644584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.535 [2024-07-15 22:27:00.644588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.535 [2024-07-15 22:27:00.644598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.535 qpair failed and we were unable to recover it. 
00:29:35.535 [2024-07-15 22:27:00.654703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.535 [2024-07-15 22:27:00.654802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.535 [2024-07-15 22:27:00.654814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.535 [2024-07-15 22:27:00.654819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.535 [2024-07-15 22:27:00.654823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.535 [2024-07-15 22:27:00.654834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.535 qpair failed and we were unable to recover it. 00:29:35.535 [2024-07-15 22:27:00.664593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.535 [2024-07-15 22:27:00.664677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.535 [2024-07-15 22:27:00.664696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.535 [2024-07-15 22:27:00.664702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.535 [2024-07-15 22:27:00.664707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.535 [2024-07-15 22:27:00.664725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.535 qpair failed and we were unable to recover it. 00:29:35.535 [2024-07-15 22:27:00.674675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.535 [2024-07-15 22:27:00.674747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.535 [2024-07-15 22:27:00.674760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.535 [2024-07-15 22:27:00.674765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.535 [2024-07-15 22:27:00.674769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.535 [2024-07-15 22:27:00.674780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.535 qpair failed and we were unable to recover it. 
00:29:35.535 [2024-07-15 22:27:00.684704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.535 [2024-07-15 22:27:00.684777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.535 [2024-07-15 22:27:00.684796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.535 [2024-07-15 22:27:00.684802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.535 [2024-07-15 22:27:00.684806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.535 [2024-07-15 22:27:00.684820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.535 qpair failed and we were unable to recover it. 00:29:35.535 [2024-07-15 22:27:00.694745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.535 [2024-07-15 22:27:00.694835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.535 [2024-07-15 22:27:00.694854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.535 [2024-07-15 22:27:00.694860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.535 [2024-07-15 22:27:00.694864] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.536 [2024-07-15 22:27:00.694879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.536 qpair failed and we were unable to recover it. 00:29:35.536 [2024-07-15 22:27:00.704796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.536 [2024-07-15 22:27:00.704873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.536 [2024-07-15 22:27:00.704892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.536 [2024-07-15 22:27:00.704898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.536 [2024-07-15 22:27:00.704903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.536 [2024-07-15 22:27:00.704917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.536 qpair failed and we were unable to recover it. 
00:29:35.536 [2024-07-15 22:27:00.714757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.536 [2024-07-15 22:27:00.714826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.536 [2024-07-15 22:27:00.714843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.536 [2024-07-15 22:27:00.714848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.536 [2024-07-15 22:27:00.714852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.536 [2024-07-15 22:27:00.714864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.536 qpair failed and we were unable to recover it. 00:29:35.536 [2024-07-15 22:27:00.724819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.536 [2024-07-15 22:27:00.724935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.536 [2024-07-15 22:27:00.724954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.536 [2024-07-15 22:27:00.724961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.536 [2024-07-15 22:27:00.724965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.536 [2024-07-15 22:27:00.724979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.536 qpair failed and we were unable to recover it. 00:29:35.536 [2024-07-15 22:27:00.734845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.536 [2024-07-15 22:27:00.734918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.536 [2024-07-15 22:27:00.734932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.536 [2024-07-15 22:27:00.734937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.536 [2024-07-15 22:27:00.734942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.536 [2024-07-15 22:27:00.734954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.536 qpair failed and we were unable to recover it. 
00:29:35.536 [2024-07-15 22:27:00.744790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.536 [2024-07-15 22:27:00.744866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.536 [2024-07-15 22:27:00.744885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.536 [2024-07-15 22:27:00.744891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.536 [2024-07-15 22:27:00.744895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.536 [2024-07-15 22:27:00.744910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.536 qpair failed and we were unable to recover it. 00:29:35.536 [2024-07-15 22:27:00.754892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.536 [2024-07-15 22:27:00.754961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.536 [2024-07-15 22:27:00.754974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.536 [2024-07-15 22:27:00.754979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.536 [2024-07-15 22:27:00.754987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.536 [2024-07-15 22:27:00.754999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.536 qpair failed and we were unable to recover it. 00:29:35.536 [2024-07-15 22:27:00.764916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.536 [2024-07-15 22:27:00.764985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.536 [2024-07-15 22:27:00.764997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.536 [2024-07-15 22:27:00.765002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.536 [2024-07-15 22:27:00.765006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.536 [2024-07-15 22:27:00.765017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.536 qpair failed and we were unable to recover it. 
00:29:35.536 [2024-07-15 22:27:00.775036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.536 [2024-07-15 22:27:00.775142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.536 [2024-07-15 22:27:00.775154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.536 [2024-07-15 22:27:00.775159] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.536 [2024-07-15 22:27:00.775163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.536 [2024-07-15 22:27:00.775181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.536 qpair failed and we were unable to recover it. 00:29:35.536 [2024-07-15 22:27:00.785012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.536 [2024-07-15 22:27:00.785088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.536 [2024-07-15 22:27:00.785099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.536 [2024-07-15 22:27:00.785104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.536 [2024-07-15 22:27:00.785108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.536 [2024-07-15 22:27:00.785119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.536 qpair failed and we were unable to recover it. 00:29:35.536 [2024-07-15 22:27:00.795007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.536 [2024-07-15 22:27:00.795071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.536 [2024-07-15 22:27:00.795083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.536 [2024-07-15 22:27:00.795087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.536 [2024-07-15 22:27:00.795091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.536 [2024-07-15 22:27:00.795102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.536 qpair failed and we were unable to recover it. 
00:29:35.536 [2024-07-15 22:27:00.805061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.536 [2024-07-15 22:27:00.805144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.536 [2024-07-15 22:27:00.805157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.536 [2024-07-15 22:27:00.805162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.536 [2024-07-15 22:27:00.805166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.536 [2024-07-15 22:27:00.805176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.536 qpair failed and we were unable to recover it. 00:29:35.536 [2024-07-15 22:27:00.815047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.536 [2024-07-15 22:27:00.815130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.536 [2024-07-15 22:27:00.815143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.536 [2024-07-15 22:27:00.815147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.536 [2024-07-15 22:27:00.815151] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.536 [2024-07-15 22:27:00.815162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.536 qpair failed and we were unable to recover it. 00:29:35.536 [2024-07-15 22:27:00.825102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.536 [2024-07-15 22:27:00.825181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.536 [2024-07-15 22:27:00.825193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.536 [2024-07-15 22:27:00.825198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.536 [2024-07-15 22:27:00.825202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.536 [2024-07-15 22:27:00.825213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.536 qpair failed and we were unable to recover it. 
00:29:35.536 [2024-07-15 22:27:00.835117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.536 [2024-07-15 22:27:00.835191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.536 [2024-07-15 22:27:00.835203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.536 [2024-07-15 22:27:00.835208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.536 [2024-07-15 22:27:00.835213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.536 [2024-07-15 22:27:00.835223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.537 qpair failed and we were unable to recover it. 00:29:35.537 [2024-07-15 22:27:00.845146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.537 [2024-07-15 22:27:00.845211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.537 [2024-07-15 22:27:00.845223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.537 [2024-07-15 22:27:00.845229] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.537 [2024-07-15 22:27:00.845235] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.537 [2024-07-15 22:27:00.845246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.537 qpair failed and we were unable to recover it. 00:29:35.537 [2024-07-15 22:27:00.855241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.537 [2024-07-15 22:27:00.855314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.537 [2024-07-15 22:27:00.855326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.537 [2024-07-15 22:27:00.855331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.537 [2024-07-15 22:27:00.855335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.537 [2024-07-15 22:27:00.855346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.537 qpair failed and we were unable to recover it. 
00:29:35.799 [2024-07-15 22:27:00.865238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.799 [2024-07-15 22:27:00.865356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.799 [2024-07-15 22:27:00.865368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.799 [2024-07-15 22:27:00.865373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.799 [2024-07-15 22:27:00.865377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.799 [2024-07-15 22:27:00.865389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.799 qpair failed and we were unable to recover it. 00:29:35.799 [2024-07-15 22:27:00.875244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.799 [2024-07-15 22:27:00.875340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.799 [2024-07-15 22:27:00.875352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.799 [2024-07-15 22:27:00.875357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.799 [2024-07-15 22:27:00.875361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.799 [2024-07-15 22:27:00.875372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.799 qpair failed and we were unable to recover it. 00:29:35.799 [2024-07-15 22:27:00.885157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.799 [2024-07-15 22:27:00.885226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.799 [2024-07-15 22:27:00.885238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.799 [2024-07-15 22:27:00.885243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.799 [2024-07-15 22:27:00.885247] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.799 [2024-07-15 22:27:00.885258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.799 qpair failed and we were unable to recover it. 
00:29:35.799 [2024-07-15 22:27:00.895300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.799 [2024-07-15 22:27:00.895371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.799 [2024-07-15 22:27:00.895382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.799 [2024-07-15 22:27:00.895387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.799 [2024-07-15 22:27:00.895391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.799 [2024-07-15 22:27:00.895402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.799 qpair failed and we were unable to recover it. 00:29:35.799 [2024-07-15 22:27:00.905311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.799 [2024-07-15 22:27:00.905385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.799 [2024-07-15 22:27:00.905397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.799 [2024-07-15 22:27:00.905402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.799 [2024-07-15 22:27:00.905406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.799 [2024-07-15 22:27:00.905417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.799 qpair failed and we were unable to recover it. 00:29:35.799 [2024-07-15 22:27:00.915259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.799 [2024-07-15 22:27:00.915353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.799 [2024-07-15 22:27:00.915366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.799 [2024-07-15 22:27:00.915370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.799 [2024-07-15 22:27:00.915375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.799 [2024-07-15 22:27:00.915385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.799 qpair failed and we were unable to recover it. 
00:29:35.799 [2024-07-15 22:27:00.925383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.799 [2024-07-15 22:27:00.925452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.799 [2024-07-15 22:27:00.925464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.799 [2024-07-15 22:27:00.925469] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.799 [2024-07-15 22:27:00.925473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.799 [2024-07-15 22:27:00.925484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.799 qpair failed and we were unable to recover it. 00:29:35.799 [2024-07-15 22:27:00.935402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.799 [2024-07-15 22:27:00.935473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.799 [2024-07-15 22:27:00.935486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.799 [2024-07-15 22:27:00.935495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.799 [2024-07-15 22:27:00.935499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.799 [2024-07-15 22:27:00.935510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.799 qpair failed and we were unable to recover it. 00:29:35.799 [2024-07-15 22:27:00.945445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.799 [2024-07-15 22:27:00.945516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.799 [2024-07-15 22:27:00.945527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.799 [2024-07-15 22:27:00.945533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.799 [2024-07-15 22:27:00.945537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.799 [2024-07-15 22:27:00.945548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.799 qpair failed and we were unable to recover it. 
00:29:35.799 [2024-07-15 22:27:00.955353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.799 [2024-07-15 22:27:00.955422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.799 [2024-07-15 22:27:00.955434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.799 [2024-07-15 22:27:00.955439] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.799 [2024-07-15 22:27:00.955443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.799 [2024-07-15 22:27:00.955454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.799 qpair failed and we were unable to recover it. 00:29:35.799 [2024-07-15 22:27:00.965438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.799 [2024-07-15 22:27:00.965513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.799 [2024-07-15 22:27:00.965526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.799 [2024-07-15 22:27:00.965531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.799 [2024-07-15 22:27:00.965535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.799 [2024-07-15 22:27:00.965546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.799 qpair failed and we were unable to recover it. 00:29:35.799 [2024-07-15 22:27:00.975508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.799 [2024-07-15 22:27:00.975579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.799 [2024-07-15 22:27:00.975591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.799 [2024-07-15 22:27:00.975596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.799 [2024-07-15 22:27:00.975600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.799 [2024-07-15 22:27:00.975611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.799 qpair failed and we were unable to recover it. 
00:29:35.799 [2024-07-15 22:27:00.985556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.799 [2024-07-15 22:27:00.985635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.799 [2024-07-15 22:27:00.985647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.799 [2024-07-15 22:27:00.985652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.799 [2024-07-15 22:27:00.985656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.799 [2024-07-15 22:27:00.985667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.800 qpair failed and we were unable to recover it. 00:29:35.800 [2024-07-15 22:27:00.995559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.800 [2024-07-15 22:27:00.995626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.800 [2024-07-15 22:27:00.995638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.800 [2024-07-15 22:27:00.995643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.800 [2024-07-15 22:27:00.995647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.800 [2024-07-15 22:27:00.995658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.800 qpair failed and we were unable to recover it. 00:29:35.800 [2024-07-15 22:27:01.005484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.800 [2024-07-15 22:27:01.005552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.800 [2024-07-15 22:27:01.005565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.800 [2024-07-15 22:27:01.005570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.800 [2024-07-15 22:27:01.005574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.800 [2024-07-15 22:27:01.005585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.800 qpair failed and we were unable to recover it. 
00:29:35.800 [2024-07-15 22:27:01.015645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.800 [2024-07-15 22:27:01.015715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.800 [2024-07-15 22:27:01.015728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.800 [2024-07-15 22:27:01.015733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.800 [2024-07-15 22:27:01.015737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.800 [2024-07-15 22:27:01.015747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.800 qpair failed and we were unable to recover it. 00:29:35.800 [2024-07-15 22:27:01.025761] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.800 [2024-07-15 22:27:01.025837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.800 [2024-07-15 22:27:01.025853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.800 [2024-07-15 22:27:01.025858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.800 [2024-07-15 22:27:01.025862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.800 [2024-07-15 22:27:01.025873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.800 qpair failed and we were unable to recover it. 00:29:35.800 [2024-07-15 22:27:01.035663] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.800 [2024-07-15 22:27:01.035743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.800 [2024-07-15 22:27:01.035755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.800 [2024-07-15 22:27:01.035760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.800 [2024-07-15 22:27:01.035764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.800 [2024-07-15 22:27:01.035775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.800 qpair failed and we were unable to recover it. 
00:29:35.800 [2024-07-15 22:27:01.045688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.800 [2024-07-15 22:27:01.045760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.800 [2024-07-15 22:27:01.045779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.800 [2024-07-15 22:27:01.045785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.800 [2024-07-15 22:27:01.045790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.800 [2024-07-15 22:27:01.045804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.800 qpair failed and we were unable to recover it. 00:29:35.800 [2024-07-15 22:27:01.055732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.800 [2024-07-15 22:27:01.055806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.800 [2024-07-15 22:27:01.055825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.800 [2024-07-15 22:27:01.055832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.800 [2024-07-15 22:27:01.055837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.800 [2024-07-15 22:27:01.055851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.800 qpair failed and we were unable to recover it. 00:29:35.800 [2024-07-15 22:27:01.065777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.800 [2024-07-15 22:27:01.065862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.800 [2024-07-15 22:27:01.065881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.800 [2024-07-15 22:27:01.065887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.800 [2024-07-15 22:27:01.065893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.800 [2024-07-15 22:27:01.065911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.800 qpair failed and we were unable to recover it. 
00:29:35.800 [2024-07-15 22:27:01.075771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.800 [2024-07-15 22:27:01.075850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.800 [2024-07-15 22:27:01.075869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.800 [2024-07-15 22:27:01.075875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.800 [2024-07-15 22:27:01.075879] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.800 [2024-07-15 22:27:01.075893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.800 qpair failed and we were unable to recover it. 00:29:35.800 [2024-07-15 22:27:01.085714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.800 [2024-07-15 22:27:01.085790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.800 [2024-07-15 22:27:01.085808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.800 [2024-07-15 22:27:01.085814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.800 [2024-07-15 22:27:01.085819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.800 [2024-07-15 22:27:01.085833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.800 qpair failed and we were unable to recover it. 00:29:35.800 [2024-07-15 22:27:01.095932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.800 [2024-07-15 22:27:01.096042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.800 [2024-07-15 22:27:01.096055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.800 [2024-07-15 22:27:01.096060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.800 [2024-07-15 22:27:01.096065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.800 [2024-07-15 22:27:01.096076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.800 qpair failed and we were unable to recover it. 
00:29:35.800 [2024-07-15 22:27:01.105864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.800 [2024-07-15 22:27:01.105957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.800 [2024-07-15 22:27:01.105970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.800 [2024-07-15 22:27:01.105975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.800 [2024-07-15 22:27:01.105979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.800 [2024-07-15 22:27:01.105989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.800 qpair failed and we were unable to recover it. 00:29:35.800 [2024-07-15 22:27:01.115883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.800 [2024-07-15 22:27:01.115959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.800 [2024-07-15 22:27:01.115981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.800 [2024-07-15 22:27:01.115988] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.800 [2024-07-15 22:27:01.115992] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:35.800 [2024-07-15 22:27:01.116007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.800 qpair failed and we were unable to recover it. 00:29:36.064 [2024-07-15 22:27:01.125915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.064 [2024-07-15 22:27:01.125983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.064 [2024-07-15 22:27:01.125996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.064 [2024-07-15 22:27:01.126001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.064 [2024-07-15 22:27:01.126006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.064 [2024-07-15 22:27:01.126018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.064 qpair failed and we were unable to recover it. 
00:29:36.064 [2024-07-15 22:27:01.135938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.064 [2024-07-15 22:27:01.136006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.064 [2024-07-15 22:27:01.136018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.064 [2024-07-15 22:27:01.136024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.064 [2024-07-15 22:27:01.136028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.064 [2024-07-15 22:27:01.136040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.064 qpair failed and we were unable to recover it. 00:29:36.064 [2024-07-15 22:27:01.145974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.064 [2024-07-15 22:27:01.146071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.064 [2024-07-15 22:27:01.146083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.064 [2024-07-15 22:27:01.146088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.064 [2024-07-15 22:27:01.146092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.064 [2024-07-15 22:27:01.146103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.064 qpair failed and we were unable to recover it. 00:29:36.064 [2024-07-15 22:27:01.155990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.064 [2024-07-15 22:27:01.156057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.064 [2024-07-15 22:27:01.156069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.064 [2024-07-15 22:27:01.156075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.064 [2024-07-15 22:27:01.156082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.064 [2024-07-15 22:27:01.156093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.064 qpair failed and we were unable to recover it. 
00:29:36.064 [2024-07-15 22:27:01.166031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.064 [2024-07-15 22:27:01.166098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.064 [2024-07-15 22:27:01.166110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.064 [2024-07-15 22:27:01.166115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.064 [2024-07-15 22:27:01.166119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.064 [2024-07-15 22:27:01.166134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.064 qpair failed and we were unable to recover it. 00:29:36.064 [2024-07-15 22:27:01.176046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.064 [2024-07-15 22:27:01.176115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.064 [2024-07-15 22:27:01.176131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.064 [2024-07-15 22:27:01.176136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.064 [2024-07-15 22:27:01.176140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.064 [2024-07-15 22:27:01.176151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.064 qpair failed and we were unable to recover it. 00:29:36.064 [2024-07-15 22:27:01.185997] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.064 [2024-07-15 22:27:01.186073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.064 [2024-07-15 22:27:01.186086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.064 [2024-07-15 22:27:01.186091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.064 [2024-07-15 22:27:01.186097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.064 [2024-07-15 22:27:01.186109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.064 qpair failed and we were unable to recover it. 
00:29:36.064 [2024-07-15 22:27:01.196125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.064 [2024-07-15 22:27:01.196191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.064 [2024-07-15 22:27:01.196203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.064 [2024-07-15 22:27:01.196208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.064 [2024-07-15 22:27:01.196212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.064 [2024-07-15 22:27:01.196223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.064 qpair failed and we were unable to recover it. 00:29:36.064 [2024-07-15 22:27:01.206260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.064 [2024-07-15 22:27:01.206334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.064 [2024-07-15 22:27:01.206346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.064 [2024-07-15 22:27:01.206351] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.064 [2024-07-15 22:27:01.206355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.064 [2024-07-15 22:27:01.206366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.064 qpair failed and we were unable to recover it. 00:29:36.064 [2024-07-15 22:27:01.216191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.064 [2024-07-15 22:27:01.216270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.064 [2024-07-15 22:27:01.216283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.064 [2024-07-15 22:27:01.216288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.064 [2024-07-15 22:27:01.216292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.064 [2024-07-15 22:27:01.216303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.064 qpair failed and we were unable to recover it. 
00:29:36.064 [2024-07-15 22:27:01.226179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.064 [2024-07-15 22:27:01.226258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.064 [2024-07-15 22:27:01.226270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.064 [2024-07-15 22:27:01.226275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.064 [2024-07-15 22:27:01.226279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.064 [2024-07-15 22:27:01.226290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.064 qpair failed and we were unable to recover it. 00:29:36.064 [2024-07-15 22:27:01.236269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.064 [2024-07-15 22:27:01.236350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.064 [2024-07-15 22:27:01.236362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.064 [2024-07-15 22:27:01.236367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.064 [2024-07-15 22:27:01.236371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.064 [2024-07-15 22:27:01.236382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.064 qpair failed and we were unable to recover it. 00:29:36.064 [2024-07-15 22:27:01.246169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.064 [2024-07-15 22:27:01.246238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.064 [2024-07-15 22:27:01.246251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.064 [2024-07-15 22:27:01.246255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.064 [2024-07-15 22:27:01.246262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.064 [2024-07-15 22:27:01.246273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.064 qpair failed and we were unable to recover it. 
00:29:36.064 [2024-07-15 22:27:01.256294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.064 [2024-07-15 22:27:01.256367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.064 [2024-07-15 22:27:01.256379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.064 [2024-07-15 22:27:01.256384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.064 [2024-07-15 22:27:01.256388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.065 [2024-07-15 22:27:01.256399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.065 qpair failed and we were unable to recover it. 00:29:36.065 [2024-07-15 22:27:01.266354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.065 [2024-07-15 22:27:01.266428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.065 [2024-07-15 22:27:01.266440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.065 [2024-07-15 22:27:01.266445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.065 [2024-07-15 22:27:01.266449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.065 [2024-07-15 22:27:01.266460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.065 qpair failed and we were unable to recover it. 00:29:36.065 [2024-07-15 22:27:01.276347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.065 [2024-07-15 22:27:01.276415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.065 [2024-07-15 22:27:01.276427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.065 [2024-07-15 22:27:01.276431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.065 [2024-07-15 22:27:01.276435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.065 [2024-07-15 22:27:01.276446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.065 qpair failed and we were unable to recover it. 
00:29:36.065 [2024-07-15 22:27:01.286366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.065 [2024-07-15 22:27:01.286460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.065 [2024-07-15 22:27:01.286471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.065 [2024-07-15 22:27:01.286476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.065 [2024-07-15 22:27:01.286480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.065 [2024-07-15 22:27:01.286491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.065 qpair failed and we were unable to recover it. 00:29:36.065 [2024-07-15 22:27:01.296395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.065 [2024-07-15 22:27:01.296474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.065 [2024-07-15 22:27:01.296486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.065 [2024-07-15 22:27:01.296491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.065 [2024-07-15 22:27:01.296495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.065 [2024-07-15 22:27:01.296506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.065 qpair failed and we were unable to recover it. 00:29:36.065 [2024-07-15 22:27:01.306448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.065 [2024-07-15 22:27:01.306526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.065 [2024-07-15 22:27:01.306538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.065 [2024-07-15 22:27:01.306543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.065 [2024-07-15 22:27:01.306547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.065 [2024-07-15 22:27:01.306558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.065 qpair failed and we were unable to recover it. 
00:29:36.065 [2024-07-15 22:27:01.316528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.065 [2024-07-15 22:27:01.316633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.065 [2024-07-15 22:27:01.316645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.065 [2024-07-15 22:27:01.316650] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.065 [2024-07-15 22:27:01.316655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.065 [2024-07-15 22:27:01.316665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.065 qpair failed and we were unable to recover it. 00:29:36.065 [2024-07-15 22:27:01.326473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.065 [2024-07-15 22:27:01.326542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.065 [2024-07-15 22:27:01.326555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.065 [2024-07-15 22:27:01.326560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.065 [2024-07-15 22:27:01.326564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.065 [2024-07-15 22:27:01.326575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.065 qpair failed and we were unable to recover it. 00:29:36.065 [2024-07-15 22:27:01.336496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.065 [2024-07-15 22:27:01.336564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.065 [2024-07-15 22:27:01.336576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.065 [2024-07-15 22:27:01.336584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.065 [2024-07-15 22:27:01.336588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.065 [2024-07-15 22:27:01.336598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.065 qpair failed and we were unable to recover it. 
00:29:36.065 [2024-07-15 22:27:01.346523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.065 [2024-07-15 22:27:01.346617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.065 [2024-07-15 22:27:01.346629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.065 [2024-07-15 22:27:01.346635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.065 [2024-07-15 22:27:01.346639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.065 [2024-07-15 22:27:01.346649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.065 qpair failed and we were unable to recover it. 00:29:36.065 [2024-07-15 22:27:01.356436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.065 [2024-07-15 22:27:01.356503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.065 [2024-07-15 22:27:01.356516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.065 [2024-07-15 22:27:01.356521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.065 [2024-07-15 22:27:01.356525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.065 [2024-07-15 22:27:01.356536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.065 qpair failed and we were unable to recover it. 00:29:36.065 [2024-07-15 22:27:01.366608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.065 [2024-07-15 22:27:01.366723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.065 [2024-07-15 22:27:01.366736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.065 [2024-07-15 22:27:01.366741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.065 [2024-07-15 22:27:01.366745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.065 [2024-07-15 22:27:01.366756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.065 qpair failed and we were unable to recover it. 
00:29:36.065 [2024-07-15 22:27:01.376611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.065 [2024-07-15 22:27:01.376681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.065 [2024-07-15 22:27:01.376694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.065 [2024-07-15 22:27:01.376699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.065 [2024-07-15 22:27:01.376703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.065 [2024-07-15 22:27:01.376714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.065 qpair failed and we were unable to recover it. 00:29:36.065 [2024-07-15 22:27:01.386569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.065 [2024-07-15 22:27:01.386671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.065 [2024-07-15 22:27:01.386683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.065 [2024-07-15 22:27:01.386689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.065 [2024-07-15 22:27:01.386693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.065 [2024-07-15 22:27:01.386703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.065 qpair failed and we were unable to recover it. 00:29:36.328 [2024-07-15 22:27:01.396672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.328 [2024-07-15 22:27:01.396750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.328 [2024-07-15 22:27:01.396762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.328 [2024-07-15 22:27:01.396767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.328 [2024-07-15 22:27:01.396771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.328 [2024-07-15 22:27:01.396781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.328 qpair failed and we were unable to recover it. 
00:29:36.328 [2024-07-15 22:27:01.406594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.328 [2024-07-15 22:27:01.406663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.328 [2024-07-15 22:27:01.406682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.328 [2024-07-15 22:27:01.406688] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.328 [2024-07-15 22:27:01.406693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.328 [2024-07-15 22:27:01.406707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.328 qpair failed and we were unable to recover it. 00:29:36.328 [2024-07-15 22:27:01.416708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.328 [2024-07-15 22:27:01.416783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.328 [2024-07-15 22:27:01.416796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.328 [2024-07-15 22:27:01.416802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.328 [2024-07-15 22:27:01.416806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.328 [2024-07-15 22:27:01.416818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.328 qpair failed and we were unable to recover it. 00:29:36.328 [2024-07-15 22:27:01.426782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.328 [2024-07-15 22:27:01.426901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.328 [2024-07-15 22:27:01.426924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.328 [2024-07-15 22:27:01.426930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.328 [2024-07-15 22:27:01.426935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.328 [2024-07-15 22:27:01.426949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.328 qpair failed and we were unable to recover it. 
00:29:36.328 [2024-07-15 22:27:01.436832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.328 [2024-07-15 22:27:01.436900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.328 [2024-07-15 22:27:01.436919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.328 [2024-07-15 22:27:01.436926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.328 [2024-07-15 22:27:01.436930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.328 [2024-07-15 22:27:01.436944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.328 qpair failed and we were unable to recover it. 00:29:36.328 [2024-07-15 22:27:01.446716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.328 [2024-07-15 22:27:01.446830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.328 [2024-07-15 22:27:01.446843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.328 [2024-07-15 22:27:01.446848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.328 [2024-07-15 22:27:01.446853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.328 [2024-07-15 22:27:01.446864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.328 qpair failed and we were unable to recover it. 00:29:36.328 [2024-07-15 22:27:01.456861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.328 [2024-07-15 22:27:01.456931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.328 [2024-07-15 22:27:01.456943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.328 [2024-07-15 22:27:01.456948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.328 [2024-07-15 22:27:01.456952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.328 [2024-07-15 22:27:01.456963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.328 qpair failed and we were unable to recover it. 
00:29:36.328 [2024-07-15 22:27:01.466854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.328 [2024-07-15 22:27:01.466970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.328 [2024-07-15 22:27:01.466982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.328 [2024-07-15 22:27:01.466986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.328 [2024-07-15 22:27:01.466990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.329 [2024-07-15 22:27:01.467004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.329 qpair failed and we were unable to recover it. 00:29:36.329 [2024-07-15 22:27:01.476885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.329 [2024-07-15 22:27:01.476952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.329 [2024-07-15 22:27:01.476964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.329 [2024-07-15 22:27:01.476969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.329 [2024-07-15 22:27:01.476974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.329 [2024-07-15 22:27:01.476984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.329 qpair failed and we were unable to recover it. 00:29:36.329 [2024-07-15 22:27:01.486906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.329 [2024-07-15 22:27:01.486976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.329 [2024-07-15 22:27:01.486988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.329 [2024-07-15 22:27:01.486993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.329 [2024-07-15 22:27:01.486997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.329 [2024-07-15 22:27:01.487008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.329 qpair failed and we were unable to recover it. 
00:29:36.329 [2024-07-15 22:27:01.496932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.329 [2024-07-15 22:27:01.497003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.329 [2024-07-15 22:27:01.497015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.329 [2024-07-15 22:27:01.497020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.329 [2024-07-15 22:27:01.497024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.329 [2024-07-15 22:27:01.497035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.329 qpair failed and we were unable to recover it. 00:29:36.329 [2024-07-15 22:27:01.506959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.329 [2024-07-15 22:27:01.507034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.329 [2024-07-15 22:27:01.507046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.329 [2024-07-15 22:27:01.507051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.329 [2024-07-15 22:27:01.507055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.329 [2024-07-15 22:27:01.507066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.329 qpair failed and we were unable to recover it. 00:29:36.329 [2024-07-15 22:27:01.516972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.329 [2024-07-15 22:27:01.517043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.329 [2024-07-15 22:27:01.517058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.329 [2024-07-15 22:27:01.517063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.329 [2024-07-15 22:27:01.517067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.329 [2024-07-15 22:27:01.517078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.329 qpair failed and we were unable to recover it. 
00:29:36.329 [2024-07-15 22:27:01.527010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.329 [2024-07-15 22:27:01.527088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.329 [2024-07-15 22:27:01.527100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.329 [2024-07-15 22:27:01.527105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.329 [2024-07-15 22:27:01.527109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.329 [2024-07-15 22:27:01.527120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.329 qpair failed and we were unable to recover it. 00:29:36.329 [2024-07-15 22:27:01.537054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.329 [2024-07-15 22:27:01.537157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.329 [2024-07-15 22:27:01.537169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.329 [2024-07-15 22:27:01.537175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.329 [2024-07-15 22:27:01.537179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.329 [2024-07-15 22:27:01.537190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.329 qpair failed and we were unable to recover it. 00:29:36.329 [2024-07-15 22:27:01.547070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.329 [2024-07-15 22:27:01.547149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.329 [2024-07-15 22:27:01.547161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.329 [2024-07-15 22:27:01.547166] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.329 [2024-07-15 22:27:01.547170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.329 [2024-07-15 22:27:01.547181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.329 qpair failed and we were unable to recover it. 
00:29:36.329 [2024-07-15 22:27:01.557090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.329 [2024-07-15 22:27:01.557163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.329 [2024-07-15 22:27:01.557175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.329 [2024-07-15 22:27:01.557180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.329 [2024-07-15 22:27:01.557184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.329 [2024-07-15 22:27:01.557200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.329 qpair failed and we were unable to recover it. 00:29:36.329 [2024-07-15 22:27:01.567118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.329 [2024-07-15 22:27:01.567189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.329 [2024-07-15 22:27:01.567202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.329 [2024-07-15 22:27:01.567206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.329 [2024-07-15 22:27:01.567210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.329 [2024-07-15 22:27:01.567221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.329 qpair failed and we were unable to recover it. 00:29:36.329 [2024-07-15 22:27:01.577199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.329 [2024-07-15 22:27:01.577294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.329 [2024-07-15 22:27:01.577306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.329 [2024-07-15 22:27:01.577311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.329 [2024-07-15 22:27:01.577315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.329 [2024-07-15 22:27:01.577326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.329 qpair failed and we were unable to recover it. 
00:29:36.329 [2024-07-15 22:27:01.587179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.329 [2024-07-15 22:27:01.587252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.329 [2024-07-15 22:27:01.587265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.329 [2024-07-15 22:27:01.587269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.329 [2024-07-15 22:27:01.587273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.329 [2024-07-15 22:27:01.587284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.329 qpair failed and we were unable to recover it. 00:29:36.329 [2024-07-15 22:27:01.597110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.329 [2024-07-15 22:27:01.597186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.329 [2024-07-15 22:27:01.597199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.329 [2024-07-15 22:27:01.597204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.329 [2024-07-15 22:27:01.597208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.329 [2024-07-15 22:27:01.597219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.329 qpair failed and we were unable to recover it. 00:29:36.329 [2024-07-15 22:27:01.607302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.329 [2024-07-15 22:27:01.607375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.329 [2024-07-15 22:27:01.607388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.330 [2024-07-15 22:27:01.607392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.330 [2024-07-15 22:27:01.607397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.330 [2024-07-15 22:27:01.607407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.330 qpair failed and we were unable to recover it. 
00:29:36.330 [2024-07-15 22:27:01.617280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.330 [2024-07-15 22:27:01.617356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.330 [2024-07-15 22:27:01.617368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.330 [2024-07-15 22:27:01.617373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.330 [2024-07-15 22:27:01.617377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.330 [2024-07-15 22:27:01.617388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.330 qpair failed and we were unable to recover it. 00:29:36.330 [2024-07-15 22:27:01.627302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.330 [2024-07-15 22:27:01.627379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.330 [2024-07-15 22:27:01.627391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.330 [2024-07-15 22:27:01.627396] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.330 [2024-07-15 22:27:01.627400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.330 [2024-07-15 22:27:01.627411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.330 qpair failed and we were unable to recover it. 00:29:36.330 [2024-07-15 22:27:01.637356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.330 [2024-07-15 22:27:01.637428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.330 [2024-07-15 22:27:01.637440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.330 [2024-07-15 22:27:01.637445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.330 [2024-07-15 22:27:01.637449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.330 [2024-07-15 22:27:01.637460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.330 qpair failed and we were unable to recover it. 
00:29:36.330 [2024-07-15 22:27:01.647396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.330 [2024-07-15 22:27:01.647461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.330 [2024-07-15 22:27:01.647473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.330 [2024-07-15 22:27:01.647479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.330 [2024-07-15 22:27:01.647486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.330 [2024-07-15 22:27:01.647497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.330 qpair failed and we were unable to recover it. 00:29:36.591 [2024-07-15 22:27:01.657408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.591 [2024-07-15 22:27:01.657488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.591 [2024-07-15 22:27:01.657501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.591 [2024-07-15 22:27:01.657506] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.591 [2024-07-15 22:27:01.657510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.591 [2024-07-15 22:27:01.657520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.591 qpair failed and we were unable to recover it. 00:29:36.591 [2024-07-15 22:27:01.667319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.591 [2024-07-15 22:27:01.667395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.591 [2024-07-15 22:27:01.667408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.591 [2024-07-15 22:27:01.667413] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.591 [2024-07-15 22:27:01.667417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.591 [2024-07-15 22:27:01.667429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.591 qpair failed and we were unable to recover it. 
00:29:36.591 [2024-07-15 22:27:01.677482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.591 [2024-07-15 22:27:01.677552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.591 [2024-07-15 22:27:01.677565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.591 [2024-07-15 22:27:01.677570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.591 [2024-07-15 22:27:01.677574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.591 [2024-07-15 22:27:01.677584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.591 qpair failed and we were unable to recover it. 00:29:36.592 [2024-07-15 22:27:01.687530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.592 [2024-07-15 22:27:01.687623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.592 [2024-07-15 22:27:01.687636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.592 [2024-07-15 22:27:01.687641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.592 [2024-07-15 22:27:01.687645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.592 [2024-07-15 22:27:01.687656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.592 qpair failed and we were unable to recover it. 00:29:36.592 [2024-07-15 22:27:01.697523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.592 [2024-07-15 22:27:01.697595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.592 [2024-07-15 22:27:01.697607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.592 [2024-07-15 22:27:01.697612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.592 [2024-07-15 22:27:01.697617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.592 [2024-07-15 22:27:01.697627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.592 qpair failed and we were unable to recover it. 
00:29:36.592 [2024-07-15 22:27:01.707545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.592 [2024-07-15 22:27:01.707618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.592 [2024-07-15 22:27:01.707630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.592 [2024-07-15 22:27:01.707635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.592 [2024-07-15 22:27:01.707639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.592 [2024-07-15 22:27:01.707650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.592 qpair failed and we were unable to recover it. 00:29:36.592 [2024-07-15 22:27:01.717566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.592 [2024-07-15 22:27:01.717640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.592 [2024-07-15 22:27:01.717652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.592 [2024-07-15 22:27:01.717657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.592 [2024-07-15 22:27:01.717661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.592 [2024-07-15 22:27:01.717672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.592 qpair failed and we were unable to recover it. 00:29:36.592 [2024-07-15 22:27:01.727608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.592 [2024-07-15 22:27:01.727677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.592 [2024-07-15 22:27:01.727689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.592 [2024-07-15 22:27:01.727694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.592 [2024-07-15 22:27:01.727699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.592 [2024-07-15 22:27:01.727709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.592 qpair failed and we were unable to recover it. 
00:29:36.592 [2024-07-15 22:27:01.737651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.592 [2024-07-15 22:27:01.737727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.592 [2024-07-15 22:27:01.737746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.592 [2024-07-15 22:27:01.737756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.592 [2024-07-15 22:27:01.737761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.592 [2024-07-15 22:27:01.737775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.592 qpair failed and we were unable to recover it. 00:29:36.592 [2024-07-15 22:27:01.747639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.592 [2024-07-15 22:27:01.747716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.592 [2024-07-15 22:27:01.747734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.592 [2024-07-15 22:27:01.747740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.592 [2024-07-15 22:27:01.747745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.592 [2024-07-15 22:27:01.747759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.592 qpair failed and we were unable to recover it. 00:29:36.592 [2024-07-15 22:27:01.757659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.592 [2024-07-15 22:27:01.757735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.592 [2024-07-15 22:27:01.757748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.592 [2024-07-15 22:27:01.757753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.592 [2024-07-15 22:27:01.757757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.592 [2024-07-15 22:27:01.757768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.592 qpair failed and we were unable to recover it. 
00:29:36.592 [2024-07-15 22:27:01.767727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.592 [2024-07-15 22:27:01.767804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.592 [2024-07-15 22:27:01.767823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.592 [2024-07-15 22:27:01.767829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.592 [2024-07-15 22:27:01.767834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.592 [2024-07-15 22:27:01.767848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.592 qpair failed and we were unable to recover it. 00:29:36.592 [2024-07-15 22:27:01.777782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.592 [2024-07-15 22:27:01.777885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.592 [2024-07-15 22:27:01.777898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.592 [2024-07-15 22:27:01.777903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.592 [2024-07-15 22:27:01.777908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.592 [2024-07-15 22:27:01.777919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.592 qpair failed and we were unable to recover it. 00:29:36.592 [2024-07-15 22:27:01.787762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.592 [2024-07-15 22:27:01.787842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.592 [2024-07-15 22:27:01.787862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.592 [2024-07-15 22:27:01.787868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.592 [2024-07-15 22:27:01.787872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.592 [2024-07-15 22:27:01.787886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.592 qpair failed and we were unable to recover it. 
00:29:36.592 [2024-07-15 22:27:01.797801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.592 [2024-07-15 22:27:01.797874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.592 [2024-07-15 22:27:01.797893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.592 [2024-07-15 22:27:01.797899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.592 [2024-07-15 22:27:01.797904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.592 [2024-07-15 22:27:01.797918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.592 qpair failed and we were unable to recover it. 00:29:36.592 [2024-07-15 22:27:01.807840] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.592 [2024-07-15 22:27:01.807915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.592 [2024-07-15 22:27:01.807934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.592 [2024-07-15 22:27:01.807940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.592 [2024-07-15 22:27:01.807945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.592 [2024-07-15 22:27:01.807959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.592 qpair failed and we were unable to recover it. 00:29:36.592 [2024-07-15 22:27:01.817891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.592 [2024-07-15 22:27:01.817968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.592 [2024-07-15 22:27:01.817981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.592 [2024-07-15 22:27:01.817986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.592 [2024-07-15 22:27:01.817991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.592 [2024-07-15 22:27:01.818002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.592 qpair failed and we were unable to recover it. 
00:29:36.592 [2024-07-15 22:27:01.827925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.592 [2024-07-15 22:27:01.827999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.592 [2024-07-15 22:27:01.828012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.592 [2024-07-15 22:27:01.828021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.592 [2024-07-15 22:27:01.828025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.592 [2024-07-15 22:27:01.828036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.592 qpair failed and we were unable to recover it. 00:29:36.592 [2024-07-15 22:27:01.837956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.592 [2024-07-15 22:27:01.838025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.592 [2024-07-15 22:27:01.838037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.592 [2024-07-15 22:27:01.838042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.592 [2024-07-15 22:27:01.838046] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.592 [2024-07-15 22:27:01.838057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.592 qpair failed and we were unable to recover it. 00:29:36.592 [2024-07-15 22:27:01.847922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.592 [2024-07-15 22:27:01.847995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.592 [2024-07-15 22:27:01.848007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.592 [2024-07-15 22:27:01.848012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.592 [2024-07-15 22:27:01.848016] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.592 [2024-07-15 22:27:01.848027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.592 qpair failed and we were unable to recover it. 
00:29:36.592 [2024-07-15 22:27:01.857991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.592 [2024-07-15 22:27:01.858060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.592 [2024-07-15 22:27:01.858072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.592 [2024-07-15 22:27:01.858077] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.592 [2024-07-15 22:27:01.858081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.592 [2024-07-15 22:27:01.858092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.592 qpair failed and we were unable to recover it. 00:29:36.592 [2024-07-15 22:27:01.867993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.592 [2024-07-15 22:27:01.868068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.592 [2024-07-15 22:27:01.868079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.592 [2024-07-15 22:27:01.868084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.592 [2024-07-15 22:27:01.868089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.592 [2024-07-15 22:27:01.868099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.592 qpair failed and we were unable to recover it. 00:29:36.592 [2024-07-15 22:27:01.877933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.592 [2024-07-15 22:27:01.878004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.592 [2024-07-15 22:27:01.878016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.592 [2024-07-15 22:27:01.878021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.592 [2024-07-15 22:27:01.878025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.592 [2024-07-15 22:27:01.878036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.592 qpair failed and we were unable to recover it. 
00:29:36.592 [2024-07-15 22:27:01.887900] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.592 [2024-07-15 22:27:01.887970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.592 [2024-07-15 22:27:01.887982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.592 [2024-07-15 22:27:01.887987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.592 [2024-07-15 22:27:01.887991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.592 [2024-07-15 22:27:01.888002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.592 qpair failed and we were unable to recover it. 00:29:36.592 [2024-07-15 22:27:01.898066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.592 [2024-07-15 22:27:01.898138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.592 [2024-07-15 22:27:01.898150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.592 [2024-07-15 22:27:01.898156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.592 [2024-07-15 22:27:01.898160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.592 [2024-07-15 22:27:01.898170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.592 qpair failed and we were unable to recover it. 00:29:36.592 [2024-07-15 22:27:01.908117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.593 [2024-07-15 22:27:01.908189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.593 [2024-07-15 22:27:01.908201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.593 [2024-07-15 22:27:01.908206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.593 [2024-07-15 22:27:01.908210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.593 [2024-07-15 22:27:01.908221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.593 qpair failed and we were unable to recover it. 
00:29:36.854 [2024-07-15 22:27:01.918013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.854 [2024-07-15 22:27:01.918192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.854 [2024-07-15 22:27:01.918208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.854 [2024-07-15 22:27:01.918213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.854 [2024-07-15 22:27:01.918217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.854 [2024-07-15 22:27:01.918229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.854 qpair failed and we were unable to recover it. 00:29:36.854 [2024-07-15 22:27:01.928124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.855 [2024-07-15 22:27:01.928185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.855 [2024-07-15 22:27:01.928197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.855 [2024-07-15 22:27:01.928202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.855 [2024-07-15 22:27:01.928206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.855 [2024-07-15 22:27:01.928216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.855 qpair failed and we were unable to recover it. 00:29:36.855 [2024-07-15 22:27:01.938178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.855 [2024-07-15 22:27:01.938249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.855 [2024-07-15 22:27:01.938261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.855 [2024-07-15 22:27:01.938266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.855 [2024-07-15 22:27:01.938270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.855 [2024-07-15 22:27:01.938282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.855 qpair failed and we were unable to recover it. 
00:29:36.855 [2024-07-15 22:27:01.948197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.855 [2024-07-15 22:27:01.948273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.855 [2024-07-15 22:27:01.948285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.855 [2024-07-15 22:27:01.948290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.855 [2024-07-15 22:27:01.948294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.855 [2024-07-15 22:27:01.948305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.855 qpair failed and we were unable to recover it. 00:29:36.855 [2024-07-15 22:27:01.958136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.855 [2024-07-15 22:27:01.958207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.855 [2024-07-15 22:27:01.958219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.855 [2024-07-15 22:27:01.958224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.855 [2024-07-15 22:27:01.958228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.855 [2024-07-15 22:27:01.958241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.855 qpair failed and we were unable to recover it. 00:29:36.855 [2024-07-15 22:27:01.968221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.855 [2024-07-15 22:27:01.968287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.855 [2024-07-15 22:27:01.968299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.855 [2024-07-15 22:27:01.968304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.855 [2024-07-15 22:27:01.968308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.855 [2024-07-15 22:27:01.968319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.855 qpair failed and we were unable to recover it. 
00:29:36.855 [2024-07-15 22:27:01.978307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.855 [2024-07-15 22:27:01.978424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.855 [2024-07-15 22:27:01.978436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.855 [2024-07-15 22:27:01.978441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.855 [2024-07-15 22:27:01.978445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.855 [2024-07-15 22:27:01.978456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.855 qpair failed and we were unable to recover it. 00:29:36.855 [2024-07-15 22:27:01.988245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.855 [2024-07-15 22:27:01.988316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.855 [2024-07-15 22:27:01.988328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.855 [2024-07-15 22:27:01.988333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.855 [2024-07-15 22:27:01.988337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.855 [2024-07-15 22:27:01.988348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.855 qpair failed and we were unable to recover it. 00:29:36.855 [2024-07-15 22:27:01.998360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.855 [2024-07-15 22:27:01.998433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.855 [2024-07-15 22:27:01.998445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.855 [2024-07-15 22:27:01.998450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.855 [2024-07-15 22:27:01.998454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.855 [2024-07-15 22:27:01.998465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.855 qpair failed and we were unable to recover it. 
00:29:36.855 [2024-07-15 22:27:02.008369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.855 [2024-07-15 22:27:02.008435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.855 [2024-07-15 22:27:02.008449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.855 [2024-07-15 22:27:02.008454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.855 [2024-07-15 22:27:02.008458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.855 [2024-07-15 22:27:02.008469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.855 qpair failed and we were unable to recover it. 00:29:36.855 [2024-07-15 22:27:02.018427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.855 [2024-07-15 22:27:02.018501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.855 [2024-07-15 22:27:02.018513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.855 [2024-07-15 22:27:02.018518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.855 [2024-07-15 22:27:02.018522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.855 [2024-07-15 22:27:02.018533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.855 qpair failed and we were unable to recover it. 00:29:36.855 [2024-07-15 22:27:02.028467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.855 [2024-07-15 22:27:02.028538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.855 [2024-07-15 22:27:02.028550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.855 [2024-07-15 22:27:02.028555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.855 [2024-07-15 22:27:02.028559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.855 [2024-07-15 22:27:02.028569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.855 qpair failed and we were unable to recover it. 
00:29:36.855 [2024-07-15 22:27:02.038476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.855 [2024-07-15 22:27:02.038544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.855 [2024-07-15 22:27:02.038556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.855 [2024-07-15 22:27:02.038561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.855 [2024-07-15 22:27:02.038565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.855 [2024-07-15 22:27:02.038575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.855 qpair failed and we were unable to recover it. 00:29:36.855 [2024-07-15 22:27:02.048447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.855 [2024-07-15 22:27:02.048512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.855 [2024-07-15 22:27:02.048523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.855 [2024-07-15 22:27:02.048529] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.855 [2024-07-15 22:27:02.048535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.855 [2024-07-15 22:27:02.048546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.855 qpair failed and we were unable to recover it. 00:29:36.855 [2024-07-15 22:27:02.058537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.855 [2024-07-15 22:27:02.058605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.855 [2024-07-15 22:27:02.058617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.855 [2024-07-15 22:27:02.058622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.855 [2024-07-15 22:27:02.058626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.855 [2024-07-15 22:27:02.058636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.855 qpair failed and we were unable to recover it. 
00:29:36.855 [2024-07-15 22:27:02.068512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.856 [2024-07-15 22:27:02.068586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.856 [2024-07-15 22:27:02.068597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.856 [2024-07-15 22:27:02.068603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.856 [2024-07-15 22:27:02.068607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.856 [2024-07-15 22:27:02.068618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.856 qpair failed and we were unable to recover it. 00:29:36.856 [2024-07-15 22:27:02.078587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.856 [2024-07-15 22:27:02.078654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.856 [2024-07-15 22:27:02.078666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.856 [2024-07-15 22:27:02.078671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.856 [2024-07-15 22:27:02.078675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.856 [2024-07-15 22:27:02.078686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.856 qpair failed and we were unable to recover it. 00:29:36.856 [2024-07-15 22:27:02.088556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.856 [2024-07-15 22:27:02.088624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.856 [2024-07-15 22:27:02.088636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.856 [2024-07-15 22:27:02.088641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.856 [2024-07-15 22:27:02.088645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.856 [2024-07-15 22:27:02.088655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.856 qpair failed and we were unable to recover it. 
00:29:36.856 [2024-07-15 22:27:02.098672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.856 [2024-07-15 22:27:02.098746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.856 [2024-07-15 22:27:02.098758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.856 [2024-07-15 22:27:02.098763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.856 [2024-07-15 22:27:02.098767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.856 [2024-07-15 22:27:02.098778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.856 qpair failed and we were unable to recover it. 00:29:36.856 [2024-07-15 22:27:02.108661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.856 [2024-07-15 22:27:02.108737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.856 [2024-07-15 22:27:02.108756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.856 [2024-07-15 22:27:02.108762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.856 [2024-07-15 22:27:02.108766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.856 [2024-07-15 22:27:02.108780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.856 qpair failed and we were unable to recover it. 00:29:36.856 [2024-07-15 22:27:02.118608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.856 [2024-07-15 22:27:02.118706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.856 [2024-07-15 22:27:02.118719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.856 [2024-07-15 22:27:02.118725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.856 [2024-07-15 22:27:02.118729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.856 [2024-07-15 22:27:02.118740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.856 qpair failed and we were unable to recover it. 
00:29:36.856 [2024-07-15 22:27:02.128706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.856 [2024-07-15 22:27:02.128779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.856 [2024-07-15 22:27:02.128798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.856 [2024-07-15 22:27:02.128804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.856 [2024-07-15 22:27:02.128809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.856 [2024-07-15 22:27:02.128823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.856 qpair failed and we were unable to recover it. 00:29:36.856 [2024-07-15 22:27:02.138882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.856 [2024-07-15 22:27:02.138965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.856 [2024-07-15 22:27:02.138984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.856 [2024-07-15 22:27:02.138994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.856 [2024-07-15 22:27:02.138998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.856 [2024-07-15 22:27:02.139012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.856 qpair failed and we were unable to recover it. 00:29:36.856 [2024-07-15 22:27:02.148778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.856 [2024-07-15 22:27:02.148848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.856 [2024-07-15 22:27:02.148861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.856 [2024-07-15 22:27:02.148867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.856 [2024-07-15 22:27:02.148871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.856 [2024-07-15 22:27:02.148882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.856 qpair failed and we were unable to recover it. 
00:29:36.856 [2024-07-15 22:27:02.158874] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.856 [2024-07-15 22:27:02.158954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.856 [2024-07-15 22:27:02.158967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.856 [2024-07-15 22:27:02.158972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.856 [2024-07-15 22:27:02.158976] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.856 [2024-07-15 22:27:02.158987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.856 qpair failed and we were unable to recover it. 00:29:36.856 [2024-07-15 22:27:02.168826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.856 [2024-07-15 22:27:02.168895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.856 [2024-07-15 22:27:02.168908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.856 [2024-07-15 22:27:02.168916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.856 [2024-07-15 22:27:02.168921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:36.856 [2024-07-15 22:27:02.168932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.856 qpair failed and we were unable to recover it. 00:29:37.118 [2024-07-15 22:27:02.178872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.118 [2024-07-15 22:27:02.178950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.118 [2024-07-15 22:27:02.178963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.118 [2024-07-15 22:27:02.178968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.118 [2024-07-15 22:27:02.178972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.118 [2024-07-15 22:27:02.178982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.118 qpair failed and we were unable to recover it. 
00:29:37.118 [2024-07-15 22:27:02.188868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.118 [2024-07-15 22:27:02.188939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.118 [2024-07-15 22:27:02.188951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.118 [2024-07-15 22:27:02.188956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.118 [2024-07-15 22:27:02.188960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.118 [2024-07-15 22:27:02.188971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.118 qpair failed and we were unable to recover it. 00:29:37.118 [2024-07-15 22:27:02.198913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.118 [2024-07-15 22:27:02.198977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.118 [2024-07-15 22:27:02.198989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.118 [2024-07-15 22:27:02.198994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.118 [2024-07-15 22:27:02.198998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.118 [2024-07-15 22:27:02.199009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.118 qpair failed and we were unable to recover it. 00:29:37.118 [2024-07-15 22:27:02.208914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.118 [2024-07-15 22:27:02.208978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.118 [2024-07-15 22:27:02.208990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.118 [2024-07-15 22:27:02.208995] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.118 [2024-07-15 22:27:02.208999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.118 [2024-07-15 22:27:02.209010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.118 qpair failed and we were unable to recover it. 
00:29:37.118 [2024-07-15 22:27:02.219010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.118 [2024-07-15 22:27:02.219116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.118 [2024-07-15 22:27:02.219132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.118 [2024-07-15 22:27:02.219137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.118 [2024-07-15 22:27:02.219142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.118 [2024-07-15 22:27:02.219153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.118 qpair failed and we were unable to recover it. 00:29:37.118 [2024-07-15 22:27:02.228972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.118 [2024-07-15 22:27:02.229042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.118 [2024-07-15 22:27:02.229054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.118 [2024-07-15 22:27:02.229062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.118 [2024-07-15 22:27:02.229066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.118 [2024-07-15 22:27:02.229077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.118 qpair failed and we were unable to recover it. 00:29:37.118 [2024-07-15 22:27:02.239027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.118 [2024-07-15 22:27:02.239091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.118 [2024-07-15 22:27:02.239103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.118 [2024-07-15 22:27:02.239113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.118 [2024-07-15 22:27:02.239117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.118 [2024-07-15 22:27:02.239132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.118 qpair failed and we were unable to recover it. 
00:29:37.118 [2024-07-15 22:27:02.249018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.118 [2024-07-15 22:27:02.249203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.118 [2024-07-15 22:27:02.249216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.118 [2024-07-15 22:27:02.249221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.118 [2024-07-15 22:27:02.249225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.118 [2024-07-15 22:27:02.249236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.118 qpair failed and we were unable to recover it. 00:29:37.118 [2024-07-15 22:27:02.259089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.118 [2024-07-15 22:27:02.259161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.118 [2024-07-15 22:27:02.259172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.118 [2024-07-15 22:27:02.259177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.118 [2024-07-15 22:27:02.259181] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.118 [2024-07-15 22:27:02.259192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.118 qpair failed and we were unable to recover it. 00:29:37.118 [2024-07-15 22:27:02.269138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.118 [2024-07-15 22:27:02.269250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.118 [2024-07-15 22:27:02.269262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.118 [2024-07-15 22:27:02.269267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.118 [2024-07-15 22:27:02.269271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.118 [2024-07-15 22:27:02.269282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.118 qpair failed and we were unable to recover it. 
00:29:37.118 [2024-07-15 22:27:02.279140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.118 [2024-07-15 22:27:02.279211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.118 [2024-07-15 22:27:02.279224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.118 [2024-07-15 22:27:02.279229] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.118 [2024-07-15 22:27:02.279233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.118 [2024-07-15 22:27:02.279244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.118 qpair failed and we were unable to recover it. 00:29:37.118 [2024-07-15 22:27:02.289115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.118 [2024-07-15 22:27:02.289184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.118 [2024-07-15 22:27:02.289196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.118 [2024-07-15 22:27:02.289201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.118 [2024-07-15 22:27:02.289205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.118 [2024-07-15 22:27:02.289216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.118 qpair failed and we were unable to recover it. 00:29:37.118 [2024-07-15 22:27:02.299204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.118 [2024-07-15 22:27:02.299298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.118 [2024-07-15 22:27:02.299310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.118 [2024-07-15 22:27:02.299314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.118 [2024-07-15 22:27:02.299319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.118 [2024-07-15 22:27:02.299329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.118 qpair failed and we were unable to recover it. 
00:29:37.118 [2024-07-15 22:27:02.309178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.118 [2024-07-15 22:27:02.309250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.118 [2024-07-15 22:27:02.309262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.118 [2024-07-15 22:27:02.309267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.118 [2024-07-15 22:27:02.309271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.118 [2024-07-15 22:27:02.309282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.118 qpair failed and we were unable to recover it. 00:29:37.118 [2024-07-15 22:27:02.319256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.119 [2024-07-15 22:27:02.319327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.119 [2024-07-15 22:27:02.319343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.119 [2024-07-15 22:27:02.319348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.119 [2024-07-15 22:27:02.319352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.119 [2024-07-15 22:27:02.319363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.119 qpair failed and we were unable to recover it. 00:29:37.119 [2024-07-15 22:27:02.329221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.119 [2024-07-15 22:27:02.329284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.119 [2024-07-15 22:27:02.329295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.119 [2024-07-15 22:27:02.329300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.119 [2024-07-15 22:27:02.329304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.119 [2024-07-15 22:27:02.329315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.119 qpair failed and we were unable to recover it. 
00:29:37.119 [2024-07-15 22:27:02.339351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.119 [2024-07-15 22:27:02.339423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.119 [2024-07-15 22:27:02.339435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.119 [2024-07-15 22:27:02.339440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.119 [2024-07-15 22:27:02.339444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.119 [2024-07-15 22:27:02.339455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.119 qpair failed and we were unable to recover it. 00:29:37.119 [2024-07-15 22:27:02.349302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.119 [2024-07-15 22:27:02.349373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.119 [2024-07-15 22:27:02.349385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.119 [2024-07-15 22:27:02.349390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.119 [2024-07-15 22:27:02.349394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.119 [2024-07-15 22:27:02.349404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.119 qpair failed and we were unable to recover it. 00:29:37.119 [2024-07-15 22:27:02.359385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.119 [2024-07-15 22:27:02.359455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.119 [2024-07-15 22:27:02.359467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.119 [2024-07-15 22:27:02.359472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.119 [2024-07-15 22:27:02.359476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.119 [2024-07-15 22:27:02.359491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.119 qpair failed and we were unable to recover it. 
00:29:37.119 [2024-07-15 22:27:02.369348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.119 [2024-07-15 22:27:02.369414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.119 [2024-07-15 22:27:02.369426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.119 [2024-07-15 22:27:02.369431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.119 [2024-07-15 22:27:02.369435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.119 [2024-07-15 22:27:02.369446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.119 qpair failed and we were unable to recover it. 00:29:37.119 [2024-07-15 22:27:02.379419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.119 [2024-07-15 22:27:02.379509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.119 [2024-07-15 22:27:02.379521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.119 [2024-07-15 22:27:02.379525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.119 [2024-07-15 22:27:02.379530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.119 [2024-07-15 22:27:02.379540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.119 qpair failed and we were unable to recover it. 00:29:37.119 [2024-07-15 22:27:02.389432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.119 [2024-07-15 22:27:02.389505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.119 [2024-07-15 22:27:02.389517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.119 [2024-07-15 22:27:02.389521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.119 [2024-07-15 22:27:02.389526] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.119 [2024-07-15 22:27:02.389536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.119 qpair failed and we were unable to recover it. 
00:29:37.119 [2024-07-15 22:27:02.399511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.119 [2024-07-15 22:27:02.399583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.119 [2024-07-15 22:27:02.399596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.119 [2024-07-15 22:27:02.399601] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.119 [2024-07-15 22:27:02.399605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.119 [2024-07-15 22:27:02.399616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.119 qpair failed and we were unable to recover it. 00:29:37.119 [2024-07-15 22:27:02.409470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.119 [2024-07-15 22:27:02.409532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.119 [2024-07-15 22:27:02.409546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.119 [2024-07-15 22:27:02.409551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.119 [2024-07-15 22:27:02.409556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.119 [2024-07-15 22:27:02.409566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.119 qpair failed and we were unable to recover it. 00:29:37.119 [2024-07-15 22:27:02.419540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.119 [2024-07-15 22:27:02.419615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.119 [2024-07-15 22:27:02.419627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.119 [2024-07-15 22:27:02.419632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.119 [2024-07-15 22:27:02.419636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.119 [2024-07-15 22:27:02.419646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.119 qpair failed and we were unable to recover it. 
00:29:37.119 [2024-07-15 22:27:02.429552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.119 [2024-07-15 22:27:02.429643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.119 [2024-07-15 22:27:02.429654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.119 [2024-07-15 22:27:02.429659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.119 [2024-07-15 22:27:02.429663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.119 [2024-07-15 22:27:02.429674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.119 qpair failed and we were unable to recover it. 00:29:37.119 [2024-07-15 22:27:02.439579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.119 [2024-07-15 22:27:02.439645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.119 [2024-07-15 22:27:02.439657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.119 [2024-07-15 22:27:02.439662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.119 [2024-07-15 22:27:02.439667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.119 [2024-07-15 22:27:02.439677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.119 qpair failed and we were unable to recover it. 00:29:37.379 [2024-07-15 22:27:02.449574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.379 [2024-07-15 22:27:02.449639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.379 [2024-07-15 22:27:02.449651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.379 [2024-07-15 22:27:02.449656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.379 [2024-07-15 22:27:02.449663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.379 [2024-07-15 22:27:02.449674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.379 qpair failed and we were unable to recover it. 
00:29:37.379 [2024-07-15 22:27:02.459704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.379 [2024-07-15 22:27:02.459798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.379 [2024-07-15 22:27:02.459810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.379 [2024-07-15 22:27:02.459815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.379 [2024-07-15 22:27:02.459820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.379 [2024-07-15 22:27:02.459831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.379 qpair failed and we were unable to recover it. 00:29:37.379 [2024-07-15 22:27:02.469505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.379 [2024-07-15 22:27:02.469575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.379 [2024-07-15 22:27:02.469588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.379 [2024-07-15 22:27:02.469593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.379 [2024-07-15 22:27:02.469597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.379 [2024-07-15 22:27:02.469609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.379 qpair failed and we were unable to recover it. 00:29:37.379 [2024-07-15 22:27:02.479661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.379 [2024-07-15 22:27:02.479718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.380 [2024-07-15 22:27:02.479730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.380 [2024-07-15 22:27:02.479735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.380 [2024-07-15 22:27:02.479739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.380 [2024-07-15 22:27:02.479750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.380 qpair failed and we were unable to recover it. 
00:29:37.380 [2024-07-15 22:27:02.489642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.380 [2024-07-15 22:27:02.489726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.380 [2024-07-15 22:27:02.489738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.380 [2024-07-15 22:27:02.489743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.380 [2024-07-15 22:27:02.489748] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.380 [2024-07-15 22:27:02.489759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.380 qpair failed and we were unable to recover it. 00:29:37.380 [2024-07-15 22:27:02.499705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.380 [2024-07-15 22:27:02.499776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.380 [2024-07-15 22:27:02.499795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.380 [2024-07-15 22:27:02.499800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.380 [2024-07-15 22:27:02.499805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.380 [2024-07-15 22:27:02.499819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.380 qpair failed and we were unable to recover it. 00:29:37.380 [2024-07-15 22:27:02.509751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.380 [2024-07-15 22:27:02.509820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.380 [2024-07-15 22:27:02.509839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.380 [2024-07-15 22:27:02.509845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.380 [2024-07-15 22:27:02.509849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.380 [2024-07-15 22:27:02.509864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.380 qpair failed and we were unable to recover it. 
00:29:37.380 [2024-07-15 22:27:02.519741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.380 [2024-07-15 22:27:02.519810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.380 [2024-07-15 22:27:02.519829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.380 [2024-07-15 22:27:02.519835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.380 [2024-07-15 22:27:02.519839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.380 [2024-07-15 22:27:02.519853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.380 qpair failed and we were unable to recover it. 00:29:37.380 [2024-07-15 22:27:02.529781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.380 [2024-07-15 22:27:02.529850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.380 [2024-07-15 22:27:02.529869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.380 [2024-07-15 22:27:02.529876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.380 [2024-07-15 22:27:02.529881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.380 [2024-07-15 22:27:02.529895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.380 qpair failed and we were unable to recover it. 00:29:37.380 [2024-07-15 22:27:02.539783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.380 [2024-07-15 22:27:02.539850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.380 [2024-07-15 22:27:02.539869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.380 [2024-07-15 22:27:02.539875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.380 [2024-07-15 22:27:02.539883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.380 [2024-07-15 22:27:02.539897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.380 qpair failed and we were unable to recover it. 
00:29:37.380 [2024-07-15 22:27:02.549832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.380 [2024-07-15 22:27:02.549901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.380 [2024-07-15 22:27:02.549916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.380 [2024-07-15 22:27:02.549922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.380 [2024-07-15 22:27:02.549926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.380 [2024-07-15 22:27:02.549938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.380 qpair failed and we were unable to recover it. 00:29:37.380 [2024-07-15 22:27:02.559932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.380 [2024-07-15 22:27:02.559998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.380 [2024-07-15 22:27:02.560011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.380 [2024-07-15 22:27:02.560016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.380 [2024-07-15 22:27:02.560020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.380 [2024-07-15 22:27:02.560031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.380 qpair failed and we were unable to recover it. 00:29:37.380 [2024-07-15 22:27:02.569885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.380 [2024-07-15 22:27:02.569948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.380 [2024-07-15 22:27:02.569960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.380 [2024-07-15 22:27:02.569965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.380 [2024-07-15 22:27:02.569969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.380 [2024-07-15 22:27:02.569980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.380 qpair failed and we were unable to recover it. 
00:29:37.380 [2024-07-15 22:27:02.579950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.380 [2024-07-15 22:27:02.580054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.380 [2024-07-15 22:27:02.580066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.380 [2024-07-15 22:27:02.580071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.380 [2024-07-15 22:27:02.580075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.380 [2024-07-15 22:27:02.580086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.380 qpair failed and we were unable to recover it. 00:29:37.380 [2024-07-15 22:27:02.589937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.380 [2024-07-15 22:27:02.590004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.380 [2024-07-15 22:27:02.590016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.380 [2024-07-15 22:27:02.590021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.380 [2024-07-15 22:27:02.590025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.380 [2024-07-15 22:27:02.590036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.380 qpair failed and we were unable to recover it. 00:29:37.380 [2024-07-15 22:27:02.600065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.380 [2024-07-15 22:27:02.600132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.380 [2024-07-15 22:27:02.600144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.380 [2024-07-15 22:27:02.600149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.380 [2024-07-15 22:27:02.600153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.380 [2024-07-15 22:27:02.600164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.380 qpair failed and we were unable to recover it. 
00:29:37.380 [2024-07-15 22:27:02.609991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.380 [2024-07-15 22:27:02.610050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.380 [2024-07-15 22:27:02.610062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.380 [2024-07-15 22:27:02.610067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.380 [2024-07-15 22:27:02.610071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.380 [2024-07-15 22:27:02.610082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.380 qpair failed and we were unable to recover it. 00:29:37.380 [2024-07-15 22:27:02.620014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.380 [2024-07-15 22:27:02.620075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.380 [2024-07-15 22:27:02.620087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.380 [2024-07-15 22:27:02.620092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.380 [2024-07-15 22:27:02.620096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.380 [2024-07-15 22:27:02.620107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.380 qpair failed and we were unable to recover it. 00:29:37.380 [2024-07-15 22:27:02.630058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.380 [2024-07-15 22:27:02.630128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.380 [2024-07-15 22:27:02.630140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.380 [2024-07-15 22:27:02.630148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.380 [2024-07-15 22:27:02.630152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.380 [2024-07-15 22:27:02.630163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.380 qpair failed and we were unable to recover it. 
00:29:37.380 [2024-07-15 22:27:02.640063] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.380 [2024-07-15 22:27:02.640138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.380 [2024-07-15 22:27:02.640150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.380 [2024-07-15 22:27:02.640155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.380 [2024-07-15 22:27:02.640159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.380 [2024-07-15 22:27:02.640170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.380 qpair failed and we were unable to recover it. 00:29:37.380 [2024-07-15 22:27:02.650057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.380 [2024-07-15 22:27:02.650118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.380 [2024-07-15 22:27:02.650133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.380 [2024-07-15 22:27:02.650138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.380 [2024-07-15 22:27:02.650142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.380 [2024-07-15 22:27:02.650153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.380 qpair failed and we were unable to recover it. 00:29:37.380 [2024-07-15 22:27:02.660163] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.380 [2024-07-15 22:27:02.660225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.380 [2024-07-15 22:27:02.660237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.380 [2024-07-15 22:27:02.660241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.380 [2024-07-15 22:27:02.660245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.380 [2024-07-15 22:27:02.660256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.380 qpair failed and we were unable to recover it. 
00:29:37.380 [2024-07-15 22:27:02.670167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.381 [2024-07-15 22:27:02.670233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.381 [2024-07-15 22:27:02.670245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.381 [2024-07-15 22:27:02.670250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.381 [2024-07-15 22:27:02.670254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.381 [2024-07-15 22:27:02.670265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.381 qpair failed and we were unable to recover it. 00:29:37.381 [2024-07-15 22:27:02.680170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.381 [2024-07-15 22:27:02.680234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.381 [2024-07-15 22:27:02.680245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.381 [2024-07-15 22:27:02.680250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.381 [2024-07-15 22:27:02.680254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.381 [2024-07-15 22:27:02.680265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.381 qpair failed and we were unable to recover it. 00:29:37.381 [2024-07-15 22:27:02.690179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.381 [2024-07-15 22:27:02.690289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.381 [2024-07-15 22:27:02.690300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.381 [2024-07-15 22:27:02.690305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.381 [2024-07-15 22:27:02.690309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.381 [2024-07-15 22:27:02.690320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.381 qpair failed and we were unable to recover it. 
00:29:37.381 [2024-07-15 22:27:02.700299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.381 [2024-07-15 22:27:02.700364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.381 [2024-07-15 22:27:02.700376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.381 [2024-07-15 22:27:02.700381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.381 [2024-07-15 22:27:02.700385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.381 [2024-07-15 22:27:02.700396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.381 qpair failed and we were unable to recover it. 00:29:37.641 [2024-07-15 22:27:02.710264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.641 [2024-07-15 22:27:02.710330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.641 [2024-07-15 22:27:02.710341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.641 [2024-07-15 22:27:02.710346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.641 [2024-07-15 22:27:02.710350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.641 [2024-07-15 22:27:02.710361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.641 qpair failed and we were unable to recover it. 00:29:37.641 [2024-07-15 22:27:02.720285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.641 [2024-07-15 22:27:02.720354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.641 [2024-07-15 22:27:02.720368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.641 [2024-07-15 22:27:02.720373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.641 [2024-07-15 22:27:02.720377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.641 [2024-07-15 22:27:02.720388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.641 qpair failed and we were unable to recover it. 
00:29:37.641 [2024-07-15 22:27:02.730328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.641 [2024-07-15 22:27:02.730389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.641 [2024-07-15 22:27:02.730401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.641 [2024-07-15 22:27:02.730406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.641 [2024-07-15 22:27:02.730410] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.641 [2024-07-15 22:27:02.730421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.641 qpair failed and we were unable to recover it. 00:29:37.641 [2024-07-15 22:27:02.740356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.641 [2024-07-15 22:27:02.740422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.641 [2024-07-15 22:27:02.740434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.641 [2024-07-15 22:27:02.740438] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.641 [2024-07-15 22:27:02.740443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.641 [2024-07-15 22:27:02.740453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.641 qpair failed and we were unable to recover it. 00:29:37.641 [2024-07-15 22:27:02.750385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.641 [2024-07-15 22:27:02.750457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.641 [2024-07-15 22:27:02.750468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.641 [2024-07-15 22:27:02.750473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.641 [2024-07-15 22:27:02.750477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.641 [2024-07-15 22:27:02.750488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.641 qpair failed and we were unable to recover it. 
00:29:37.641 [2024-07-15 22:27:02.760411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.641 [2024-07-15 22:27:02.760471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.641 [2024-07-15 22:27:02.760483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.641 [2024-07-15 22:27:02.760487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.641 [2024-07-15 22:27:02.760492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.641 [2024-07-15 22:27:02.760505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.641 qpair failed and we were unable to recover it. 00:29:37.641 [2024-07-15 22:27:02.770442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.641 [2024-07-15 22:27:02.770503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.641 [2024-07-15 22:27:02.770514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.641 [2024-07-15 22:27:02.770519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.641 [2024-07-15 22:27:02.770523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.641 [2024-07-15 22:27:02.770534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.641 qpair failed and we were unable to recover it. 00:29:37.641 [2024-07-15 22:27:02.780511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.641 [2024-07-15 22:27:02.780574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.641 [2024-07-15 22:27:02.780586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.641 [2024-07-15 22:27:02.780591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.641 [2024-07-15 22:27:02.780595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.641 [2024-07-15 22:27:02.780605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.641 qpair failed and we were unable to recover it. 
00:29:37.641 [2024-07-15 22:27:02.790484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.641 [2024-07-15 22:27:02.790555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.641 [2024-07-15 22:27:02.790568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.642 [2024-07-15 22:27:02.790572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.642 [2024-07-15 22:27:02.790576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.642 [2024-07-15 22:27:02.790587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.642 qpair failed and we were unable to recover it. 00:29:37.642 [2024-07-15 22:27:02.800509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.642 [2024-07-15 22:27:02.800571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.642 [2024-07-15 22:27:02.800583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.642 [2024-07-15 22:27:02.800588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.642 [2024-07-15 22:27:02.800592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.642 [2024-07-15 22:27:02.800602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.642 qpair failed and we were unable to recover it. 00:29:37.642 [2024-07-15 22:27:02.810621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.642 [2024-07-15 22:27:02.810682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.642 [2024-07-15 22:27:02.810697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.642 [2024-07-15 22:27:02.810703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.642 [2024-07-15 22:27:02.810707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.642 [2024-07-15 22:27:02.810717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.642 qpair failed and we were unable to recover it. 
00:29:37.642 [2024-07-15 22:27:02.820578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.642 [2024-07-15 22:27:02.820641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.642 [2024-07-15 22:27:02.820653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.642 [2024-07-15 22:27:02.820658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.642 [2024-07-15 22:27:02.820662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.642 [2024-07-15 22:27:02.820673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.642 qpair failed and we were unable to recover it. 00:29:37.642 [2024-07-15 22:27:02.830600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.642 [2024-07-15 22:27:02.830671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.642 [2024-07-15 22:27:02.830683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.642 [2024-07-15 22:27:02.830690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.642 [2024-07-15 22:27:02.830696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.642 [2024-07-15 22:27:02.830708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.642 qpair failed and we were unable to recover it. 00:29:37.642 [2024-07-15 22:27:02.840611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.642 [2024-07-15 22:27:02.840671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.642 [2024-07-15 22:27:02.840684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.642 [2024-07-15 22:27:02.840689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.642 [2024-07-15 22:27:02.840693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.642 [2024-07-15 22:27:02.840704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.642 qpair failed and we were unable to recover it. 
00:29:37.642 [2024-07-15 22:27:02.850646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.642 [2024-07-15 22:27:02.850714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.642 [2024-07-15 22:27:02.850726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.642 [2024-07-15 22:27:02.850731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.642 [2024-07-15 22:27:02.850738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.642 [2024-07-15 22:27:02.850749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.642 qpair failed and we were unable to recover it. 00:29:37.642 [2024-07-15 22:27:02.860679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.642 [2024-07-15 22:27:02.860742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.642 [2024-07-15 22:27:02.860755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.642 [2024-07-15 22:27:02.860760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.642 [2024-07-15 22:27:02.860764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.642 [2024-07-15 22:27:02.860775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.642 qpair failed and we were unable to recover it. 00:29:37.642 [2024-07-15 22:27:02.870596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.642 [2024-07-15 22:27:02.870672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.642 [2024-07-15 22:27:02.870684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.642 [2024-07-15 22:27:02.870689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.642 [2024-07-15 22:27:02.870693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.642 [2024-07-15 22:27:02.870704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.642 qpair failed and we were unable to recover it. 
00:29:37.642 [2024-07-15 22:27:02.880738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.642 [2024-07-15 22:27:02.880799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.642 [2024-07-15 22:27:02.880812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.642 [2024-07-15 22:27:02.880816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.642 [2024-07-15 22:27:02.880820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.642 [2024-07-15 22:27:02.880831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.642 qpair failed and we were unable to recover it. 00:29:37.642 [2024-07-15 22:27:02.890791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.642 [2024-07-15 22:27:02.890886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.642 [2024-07-15 22:27:02.890904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.642 [2024-07-15 22:27:02.890911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.642 [2024-07-15 22:27:02.890915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.642 [2024-07-15 22:27:02.890929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.642 qpair failed and we were unable to recover it. 00:29:37.642 [2024-07-15 22:27:02.900810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.642 [2024-07-15 22:27:02.900878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.642 [2024-07-15 22:27:02.900894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.642 [2024-07-15 22:27:02.900899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.642 [2024-07-15 22:27:02.900903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.642 [2024-07-15 22:27:02.900915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.642 qpair failed and we were unable to recover it. 
00:29:37.642 [2024-07-15 22:27:02.910831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.642 [2024-07-15 22:27:02.910936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.642 [2024-07-15 22:27:02.910955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.642 [2024-07-15 22:27:02.910960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.642 [2024-07-15 22:27:02.910965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.642 [2024-07-15 22:27:02.910979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.642 qpair failed and we were unable to recover it. 00:29:37.642 [2024-07-15 22:27:02.920754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.642 [2024-07-15 22:27:02.920822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.642 [2024-07-15 22:27:02.920841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.642 [2024-07-15 22:27:02.920847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.642 [2024-07-15 22:27:02.920852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.642 [2024-07-15 22:27:02.920866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.642 qpair failed and we were unable to recover it. 00:29:37.642 [2024-07-15 22:27:02.930780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.642 [2024-07-15 22:27:02.930844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.642 [2024-07-15 22:27:02.930863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.642 [2024-07-15 22:27:02.930869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.642 [2024-07-15 22:27:02.930873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.642 [2024-07-15 22:27:02.930887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.642 qpair failed and we were unable to recover it. 
00:29:37.642 [2024-07-15 22:27:02.940896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.642 [2024-07-15 22:27:02.940961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.642 [2024-07-15 22:27:02.940975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.642 [2024-07-15 22:27:02.940980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.642 [2024-07-15 22:27:02.940988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.642 [2024-07-15 22:27:02.940999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.642 qpair failed and we were unable to recover it. 00:29:37.642 [2024-07-15 22:27:02.950959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.642 [2024-07-15 22:27:02.951026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.642 [2024-07-15 22:27:02.951038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.642 [2024-07-15 22:27:02.951043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.642 [2024-07-15 22:27:02.951047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.642 [2024-07-15 22:27:02.951058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.642 qpair failed and we were unable to recover it. 00:29:37.642 [2024-07-15 22:27:02.960955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.642 [2024-07-15 22:27:02.961027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.642 [2024-07-15 22:27:02.961039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.642 [2024-07-15 22:27:02.961044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.642 [2024-07-15 22:27:02.961048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.642 [2024-07-15 22:27:02.961059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.642 qpair failed and we were unable to recover it. 
00:29:37.903 [2024-07-15 22:27:02.971007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.903 [2024-07-15 22:27:02.971067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.903 [2024-07-15 22:27:02.971079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.903 [2024-07-15 22:27:02.971084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.903 [2024-07-15 22:27:02.971088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.903 [2024-07-15 22:27:02.971098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.903 qpair failed and we were unable to recover it. 00:29:37.903 [2024-07-15 22:27:02.981137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.903 [2024-07-15 22:27:02.981209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.903 [2024-07-15 22:27:02.981221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.903 [2024-07-15 22:27:02.981225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.903 [2024-07-15 22:27:02.981230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.903 [2024-07-15 22:27:02.981241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.903 qpair failed and we were unable to recover it. 00:29:37.903 [2024-07-15 22:27:02.990925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.903 [2024-07-15 22:27:02.990991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.903 [2024-07-15 22:27:02.991004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.903 [2024-07-15 22:27:02.991009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.903 [2024-07-15 22:27:02.991013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.903 [2024-07-15 22:27:02.991025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.903 qpair failed and we were unable to recover it. 
00:29:37.903 [2024-07-15 22:27:03.001067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.903 [2024-07-15 22:27:03.001169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.903 [2024-07-15 22:27:03.001182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.903 [2024-07-15 22:27:03.001187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.903 [2024-07-15 22:27:03.001191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.903 [2024-07-15 22:27:03.001202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.903 qpair failed and we were unable to recover it. 00:29:37.903 [2024-07-15 22:27:03.011195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.903 [2024-07-15 22:27:03.011253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.903 [2024-07-15 22:27:03.011265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.903 [2024-07-15 22:27:03.011270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.903 [2024-07-15 22:27:03.011274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.903 [2024-07-15 22:27:03.011285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.903 qpair failed and we were unable to recover it. 00:29:37.903 [2024-07-15 22:27:03.021154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.903 [2024-07-15 22:27:03.021239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.903 [2024-07-15 22:27:03.021252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.903 [2024-07-15 22:27:03.021256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.903 [2024-07-15 22:27:03.021261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.903 [2024-07-15 22:27:03.021271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.903 qpair failed and we were unable to recover it. 
00:29:37.903 [2024-07-15 22:27:03.031040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.903 [2024-07-15 22:27:03.031120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.903 [2024-07-15 22:27:03.031135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.903 [2024-07-15 22:27:03.031143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.903 [2024-07-15 22:27:03.031147] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.903 [2024-07-15 22:27:03.031158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.903 qpair failed and we were unable to recover it. 00:29:37.903 [2024-07-15 22:27:03.041343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.903 [2024-07-15 22:27:03.041411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.903 [2024-07-15 22:27:03.041424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.903 [2024-07-15 22:27:03.041429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.903 [2024-07-15 22:27:03.041434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.903 [2024-07-15 22:27:03.041445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.903 qpair failed and we were unable to recover it. 00:29:37.903 [2024-07-15 22:27:03.051238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.903 [2024-07-15 22:27:03.051301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.903 [2024-07-15 22:27:03.051314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.904 [2024-07-15 22:27:03.051319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.904 [2024-07-15 22:27:03.051323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.904 [2024-07-15 22:27:03.051334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.904 qpair failed and we were unable to recover it. 
00:29:37.904 [2024-07-15 22:27:03.061259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.904 [2024-07-15 22:27:03.061369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.904 [2024-07-15 22:27:03.061381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.904 [2024-07-15 22:27:03.061385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.904 [2024-07-15 22:27:03.061390] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.904 [2024-07-15 22:27:03.061401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.904 qpair failed and we were unable to recover it. 00:29:37.904 [2024-07-15 22:27:03.071285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.904 [2024-07-15 22:27:03.071467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.904 [2024-07-15 22:27:03.071479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.904 [2024-07-15 22:27:03.071484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.904 [2024-07-15 22:27:03.071488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.904 [2024-07-15 22:27:03.071499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.904 qpair failed and we were unable to recover it. 00:29:37.904 [2024-07-15 22:27:03.081309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.904 [2024-07-15 22:27:03.081371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.904 [2024-07-15 22:27:03.081383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.904 [2024-07-15 22:27:03.081388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.904 [2024-07-15 22:27:03.081392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.904 [2024-07-15 22:27:03.081403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.904 qpair failed and we were unable to recover it. 
00:29:37.904 [2024-07-15 22:27:03.091319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.904 [2024-07-15 22:27:03.091380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.904 [2024-07-15 22:27:03.091392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.904 [2024-07-15 22:27:03.091397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.904 [2024-07-15 22:27:03.091401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.904 [2024-07-15 22:27:03.091412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.904 qpair failed and we were unable to recover it. 00:29:37.904 [2024-07-15 22:27:03.101348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.904 [2024-07-15 22:27:03.101413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.904 [2024-07-15 22:27:03.101425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.904 [2024-07-15 22:27:03.101430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.904 [2024-07-15 22:27:03.101434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.904 [2024-07-15 22:27:03.101445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.904 qpair failed and we were unable to recover it. 00:29:37.904 [2024-07-15 22:27:03.111410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.904 [2024-07-15 22:27:03.111493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.904 [2024-07-15 22:27:03.111505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.904 [2024-07-15 22:27:03.111510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.904 [2024-07-15 22:27:03.111515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.904 [2024-07-15 22:27:03.111525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.904 qpair failed and we were unable to recover it. 
00:29:37.904 [2024-07-15 22:27:03.121402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.904 [2024-07-15 22:27:03.121464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.904 [2024-07-15 22:27:03.121479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.904 [2024-07-15 22:27:03.121484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.904 [2024-07-15 22:27:03.121488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.904 [2024-07-15 22:27:03.121499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.904 qpair failed and we were unable to recover it. 00:29:37.904 [2024-07-15 22:27:03.131457] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.904 [2024-07-15 22:27:03.131520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.904 [2024-07-15 22:27:03.131531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.904 [2024-07-15 22:27:03.131537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.904 [2024-07-15 22:27:03.131541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.904 [2024-07-15 22:27:03.131551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.904 qpair failed and we were unable to recover it. 00:29:37.904 [2024-07-15 22:27:03.141430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.904 [2024-07-15 22:27:03.141539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.904 [2024-07-15 22:27:03.141552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.904 [2024-07-15 22:27:03.141557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.904 [2024-07-15 22:27:03.141561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.904 [2024-07-15 22:27:03.141572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.904 qpair failed and we were unable to recover it. 
00:29:37.904 [2024-07-15 22:27:03.151439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.904 [2024-07-15 22:27:03.151504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.904 [2024-07-15 22:27:03.151516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.904 [2024-07-15 22:27:03.151521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.904 [2024-07-15 22:27:03.151525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.904 [2024-07-15 22:27:03.151535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.904 qpair failed and we were unable to recover it. 00:29:37.904 [2024-07-15 22:27:03.161504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.904 [2024-07-15 22:27:03.161567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.904 [2024-07-15 22:27:03.161579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.904 [2024-07-15 22:27:03.161584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.904 [2024-07-15 22:27:03.161588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.904 [2024-07-15 22:27:03.161601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.904 qpair failed and we were unable to recover it. 00:29:37.904 [2024-07-15 22:27:03.171536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.904 [2024-07-15 22:27:03.171599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.904 [2024-07-15 22:27:03.171611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.904 [2024-07-15 22:27:03.171616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.904 [2024-07-15 22:27:03.171620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.904 [2024-07-15 22:27:03.171630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.904 qpair failed and we were unable to recover it. 
00:29:37.904 [2024-07-15 22:27:03.181568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.904 [2024-07-15 22:27:03.181631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.904 [2024-07-15 22:27:03.181644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.904 [2024-07-15 22:27:03.181648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.904 [2024-07-15 22:27:03.181652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.904 [2024-07-15 22:27:03.181663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.904 qpair failed and we were unable to recover it. 00:29:37.904 [2024-07-15 22:27:03.191469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.904 [2024-07-15 22:27:03.191540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.904 [2024-07-15 22:27:03.191552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.904 [2024-07-15 22:27:03.191556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.904 [2024-07-15 22:27:03.191561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.904 [2024-07-15 22:27:03.191571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.904 qpair failed and we were unable to recover it. 00:29:37.904 [2024-07-15 22:27:03.201598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.904 [2024-07-15 22:27:03.201708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.904 [2024-07-15 22:27:03.201720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.904 [2024-07-15 22:27:03.201725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.904 [2024-07-15 22:27:03.201729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.904 [2024-07-15 22:27:03.201740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.904 qpair failed and we were unable to recover it. 
00:29:37.904 [2024-07-15 22:27:03.211656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.904 [2024-07-15 22:27:03.211766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.904 [2024-07-15 22:27:03.211781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.904 [2024-07-15 22:27:03.211786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.904 [2024-07-15 22:27:03.211790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.904 [2024-07-15 22:27:03.211801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.904 qpair failed and we were unable to recover it. 00:29:37.904 [2024-07-15 22:27:03.221663] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.904 [2024-07-15 22:27:03.221771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.904 [2024-07-15 22:27:03.221783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.904 [2024-07-15 22:27:03.221788] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.904 [2024-07-15 22:27:03.221792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:37.904 [2024-07-15 22:27:03.221803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.904 qpair failed and we were unable to recover it. 00:29:38.165 [2024-07-15 22:27:03.231688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.165 [2024-07-15 22:27:03.231758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.165 [2024-07-15 22:27:03.231777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.165 [2024-07-15 22:27:03.231783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.165 [2024-07-15 22:27:03.231788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.165 [2024-07-15 22:27:03.231802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.165 qpair failed and we were unable to recover it. 
00:29:38.165 [2024-07-15 22:27:03.241596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.165 [2024-07-15 22:27:03.241657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.165 [2024-07-15 22:27:03.241670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.165 [2024-07-15 22:27:03.241675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.165 [2024-07-15 22:27:03.241680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.165 [2024-07-15 22:27:03.241691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.165 qpair failed and we were unable to recover it. 00:29:38.165 [2024-07-15 22:27:03.251736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.165 [2024-07-15 22:27:03.251798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.165 [2024-07-15 22:27:03.251810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.165 [2024-07-15 22:27:03.251815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.165 [2024-07-15 22:27:03.251819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.165 [2024-07-15 22:27:03.251834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.165 qpair failed and we were unable to recover it. 00:29:38.165 [2024-07-15 22:27:03.261753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.165 [2024-07-15 22:27:03.261823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.165 [2024-07-15 22:27:03.261842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.165 [2024-07-15 22:27:03.261848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.165 [2024-07-15 22:27:03.261852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.165 [2024-07-15 22:27:03.261866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.165 qpair failed and we were unable to recover it. 
00:29:38.165 [2024-07-15 22:27:03.271813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.165 [2024-07-15 22:27:03.271888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.165 [2024-07-15 22:27:03.271907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.165 [2024-07-15 22:27:03.271913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.165 [2024-07-15 22:27:03.271917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.165 [2024-07-15 22:27:03.271931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.165 qpair failed and we were unable to recover it. 00:29:38.165 [2024-07-15 22:27:03.281809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.165 [2024-07-15 22:27:03.281882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.165 [2024-07-15 22:27:03.281898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.165 [2024-07-15 22:27:03.281903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.165 [2024-07-15 22:27:03.281907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.165 [2024-07-15 22:27:03.281919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.165 qpair failed and we were unable to recover it. 00:29:38.165 [2024-07-15 22:27:03.291853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.165 [2024-07-15 22:27:03.291917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.165 [2024-07-15 22:27:03.291930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.165 [2024-07-15 22:27:03.291935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.165 [2024-07-15 22:27:03.291939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.165 [2024-07-15 22:27:03.291951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.165 qpair failed and we were unable to recover it. 
00:29:38.165 [2024-07-15 22:27:03.301923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.165 [2024-07-15 22:27:03.301993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.165 [2024-07-15 22:27:03.302011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.165 [2024-07-15 22:27:03.302017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.165 [2024-07-15 22:27:03.302022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.165 [2024-07-15 22:27:03.302036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.165 qpair failed and we were unable to recover it. 00:29:38.165 [2024-07-15 22:27:03.311922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.165 [2024-07-15 22:27:03.312007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.165 [2024-07-15 22:27:03.312020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.165 [2024-07-15 22:27:03.312025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.165 [2024-07-15 22:27:03.312029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.165 [2024-07-15 22:27:03.312041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.165 qpair failed and we were unable to recover it. 00:29:38.165 [2024-07-15 22:27:03.321915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.165 [2024-07-15 22:27:03.321977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.165 [2024-07-15 22:27:03.321989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.165 [2024-07-15 22:27:03.321994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.165 [2024-07-15 22:27:03.321998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.165 [2024-07-15 22:27:03.322009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.165 qpair failed and we were unable to recover it. 
00:29:38.165 [2024-07-15 22:27:03.331939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.165 [2024-07-15 22:27:03.332000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.165 [2024-07-15 22:27:03.332012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.165 [2024-07-15 22:27:03.332017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.165 [2024-07-15 22:27:03.332021] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.165 [2024-07-15 22:27:03.332032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.165 qpair failed and we were unable to recover it. 00:29:38.165 [2024-07-15 22:27:03.341997] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.165 [2024-07-15 22:27:03.342061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.165 [2024-07-15 22:27:03.342073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.165 [2024-07-15 22:27:03.342078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.165 [2024-07-15 22:27:03.342085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.165 [2024-07-15 22:27:03.342096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.165 qpair failed and we were unable to recover it. 00:29:38.165 [2024-07-15 22:27:03.352019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.165 [2024-07-15 22:27:03.352085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.165 [2024-07-15 22:27:03.352097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.165 [2024-07-15 22:27:03.352102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.166 [2024-07-15 22:27:03.352106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.166 [2024-07-15 22:27:03.352117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.166 qpair failed and we were unable to recover it. 
00:29:38.166 [2024-07-15 22:27:03.362071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.166 [2024-07-15 22:27:03.362178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.166 [2024-07-15 22:27:03.362191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.166 [2024-07-15 22:27:03.362196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.166 [2024-07-15 22:27:03.362200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.166 [2024-07-15 22:27:03.362211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.166 qpair failed and we were unable to recover it. 00:29:38.166 [2024-07-15 22:27:03.372057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.166 [2024-07-15 22:27:03.372146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.166 [2024-07-15 22:27:03.372158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.166 [2024-07-15 22:27:03.372163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.166 [2024-07-15 22:27:03.372168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.166 [2024-07-15 22:27:03.372178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.166 qpair failed and we were unable to recover it. 00:29:38.166 [2024-07-15 22:27:03.382065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.166 [2024-07-15 22:27:03.382133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.166 [2024-07-15 22:27:03.382145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.166 [2024-07-15 22:27:03.382150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.166 [2024-07-15 22:27:03.382154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.166 [2024-07-15 22:27:03.382165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.166 qpair failed and we were unable to recover it. 
00:29:38.166 [2024-07-15 22:27:03.392193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.166 [2024-07-15 22:27:03.392306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.166 [2024-07-15 22:27:03.392317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.166 [2024-07-15 22:27:03.392322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.166 [2024-07-15 22:27:03.392326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.166 [2024-07-15 22:27:03.392338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.166 qpair failed and we were unable to recover it. 00:29:38.166 [2024-07-15 22:27:03.402146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.166 [2024-07-15 22:27:03.402208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.166 [2024-07-15 22:27:03.402220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.166 [2024-07-15 22:27:03.402225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.166 [2024-07-15 22:27:03.402229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.166 [2024-07-15 22:27:03.402240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.166 qpair failed and we were unable to recover it. 00:29:38.166 [2024-07-15 22:27:03.412154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.166 [2024-07-15 22:27:03.412216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.166 [2024-07-15 22:27:03.412228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.166 [2024-07-15 22:27:03.412233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.166 [2024-07-15 22:27:03.412237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.166 [2024-07-15 22:27:03.412248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.166 qpair failed and we were unable to recover it. 
00:29:38.166 [2024-07-15 22:27:03.422256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.166 [2024-07-15 22:27:03.422327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.166 [2024-07-15 22:27:03.422339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.166 [2024-07-15 22:27:03.422344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.166 [2024-07-15 22:27:03.422348] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.166 [2024-07-15 22:27:03.422359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.166 qpair failed and we were unable to recover it. 00:29:38.166 [2024-07-15 22:27:03.432245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.166 [2024-07-15 22:27:03.432314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.166 [2024-07-15 22:27:03.432326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.166 [2024-07-15 22:27:03.432334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.166 [2024-07-15 22:27:03.432338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.166 [2024-07-15 22:27:03.432349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.166 qpair failed and we were unable to recover it. 00:29:38.166 [2024-07-15 22:27:03.442256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.166 [2024-07-15 22:27:03.442317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.166 [2024-07-15 22:27:03.442329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.166 [2024-07-15 22:27:03.442334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.166 [2024-07-15 22:27:03.442338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.166 [2024-07-15 22:27:03.442349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.166 qpair failed and we were unable to recover it. 
00:29:38.166 [2024-07-15 22:27:03.452286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.166 [2024-07-15 22:27:03.452396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.166 [2024-07-15 22:27:03.452408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.166 [2024-07-15 22:27:03.452412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.166 [2024-07-15 22:27:03.452416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.166 [2024-07-15 22:27:03.452427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.166 qpair failed and we were unable to recover it. 00:29:38.166 [2024-07-15 22:27:03.462337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.166 [2024-07-15 22:27:03.462440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.166 [2024-07-15 22:27:03.462452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.166 [2024-07-15 22:27:03.462457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.166 [2024-07-15 22:27:03.462461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.166 [2024-07-15 22:27:03.462471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.166 qpair failed and we were unable to recover it. 00:29:38.166 [2024-07-15 22:27:03.472255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.166 [2024-07-15 22:27:03.472321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.166 [2024-07-15 22:27:03.472333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.166 [2024-07-15 22:27:03.472338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.166 [2024-07-15 22:27:03.472342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.166 [2024-07-15 22:27:03.472352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.166 qpair failed and we were unable to recover it. 
00:29:38.166 [2024-07-15 22:27:03.482360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.166 [2024-07-15 22:27:03.482427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.166 [2024-07-15 22:27:03.482439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.166 [2024-07-15 22:27:03.482443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.166 [2024-07-15 22:27:03.482447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.166 [2024-07-15 22:27:03.482458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.166 qpair failed and we were unable to recover it. 00:29:38.426 [2024-07-15 22:27:03.492279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.426 [2024-07-15 22:27:03.492347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.426 [2024-07-15 22:27:03.492360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.426 [2024-07-15 22:27:03.492365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.426 [2024-07-15 22:27:03.492369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.426 [2024-07-15 22:27:03.492380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.426 qpair failed and we were unable to recover it. 00:29:38.426 [2024-07-15 22:27:03.502410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.426 [2024-07-15 22:27:03.502477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.426 [2024-07-15 22:27:03.502489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.426 [2024-07-15 22:27:03.502494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.426 [2024-07-15 22:27:03.502498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.426 [2024-07-15 22:27:03.502509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.426 qpair failed and we were unable to recover it. 
00:29:38.426 [2024-07-15 22:27:03.512476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.426 [2024-07-15 22:27:03.512543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.426 [2024-07-15 22:27:03.512555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.426 [2024-07-15 22:27:03.512560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.426 [2024-07-15 22:27:03.512564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.426 [2024-07-15 22:27:03.512575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.426 qpair failed and we were unable to recover it. 00:29:38.426 [2024-07-15 22:27:03.522454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.426 [2024-07-15 22:27:03.522513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.426 [2024-07-15 22:27:03.522525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.426 [2024-07-15 22:27:03.522533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.426 [2024-07-15 22:27:03.522537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.426 [2024-07-15 22:27:03.522549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.426 qpair failed and we were unable to recover it. 00:29:38.426 [2024-07-15 22:27:03.532486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.426 [2024-07-15 22:27:03.532547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.426 [2024-07-15 22:27:03.532559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.426 [2024-07-15 22:27:03.532564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.426 [2024-07-15 22:27:03.532568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.426 [2024-07-15 22:27:03.532579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.426 qpair failed and we were unable to recover it. 
00:29:38.426 [2024-07-15 22:27:03.542508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.426 [2024-07-15 22:27:03.542568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.426 [2024-07-15 22:27:03.542580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.426 [2024-07-15 22:27:03.542585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.426 [2024-07-15 22:27:03.542589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.426 [2024-07-15 22:27:03.542600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.426 qpair failed and we were unable to recover it. 00:29:38.426 [2024-07-15 22:27:03.552610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.426 [2024-07-15 22:27:03.552722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.426 [2024-07-15 22:27:03.552734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.426 [2024-07-15 22:27:03.552739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.426 [2024-07-15 22:27:03.552743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.426 [2024-07-15 22:27:03.552754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.426 qpair failed and we were unable to recover it. 00:29:38.426 [2024-07-15 22:27:03.562516] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.426 [2024-07-15 22:27:03.562582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.426 [2024-07-15 22:27:03.562601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.426 [2024-07-15 22:27:03.562607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.426 [2024-07-15 22:27:03.562612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.426 [2024-07-15 22:27:03.562626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.426 qpair failed and we were unable to recover it. 
00:29:38.426 [2024-07-15 22:27:03.572583] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.426 [2024-07-15 22:27:03.572649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.426 [2024-07-15 22:27:03.572661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.426 [2024-07-15 22:27:03.572667] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.426 [2024-07-15 22:27:03.572671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.426 [2024-07-15 22:27:03.572682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.426 qpair failed and we were unable to recover it. 00:29:38.426 [2024-07-15 22:27:03.582622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.426 [2024-07-15 22:27:03.582695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.426 [2024-07-15 22:27:03.582707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.426 [2024-07-15 22:27:03.582712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.426 [2024-07-15 22:27:03.582716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.426 [2024-07-15 22:27:03.582727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.426 qpair failed and we were unable to recover it. 00:29:38.426 [2024-07-15 22:27:03.592646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.426 [2024-07-15 22:27:03.592714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.426 [2024-07-15 22:27:03.592726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.426 [2024-07-15 22:27:03.592731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.426 [2024-07-15 22:27:03.592735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.426 [2024-07-15 22:27:03.592746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.426 qpair failed and we were unable to recover it. 
00:29:38.426 [2024-07-15 22:27:03.602731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.426 [2024-07-15 22:27:03.602790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.426 [2024-07-15 22:27:03.602802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.426 [2024-07-15 22:27:03.602807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.426 [2024-07-15 22:27:03.602811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.426 [2024-07-15 22:27:03.602823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.426 qpair failed and we were unable to recover it. 00:29:38.426 [2024-07-15 22:27:03.612728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.426 [2024-07-15 22:27:03.612789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.426 [2024-07-15 22:27:03.612805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.426 [2024-07-15 22:27:03.612810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.426 [2024-07-15 22:27:03.612814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.426 [2024-07-15 22:27:03.612825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.426 qpair failed and we were unable to recover it. 00:29:38.426 [2024-07-15 22:27:03.622777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.426 [2024-07-15 22:27:03.622843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.426 [2024-07-15 22:27:03.622855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.426 [2024-07-15 22:27:03.622860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.426 [2024-07-15 22:27:03.622864] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.426 [2024-07-15 22:27:03.622875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.426 qpair failed and we were unable to recover it. 
00:29:38.426 [2024-07-15 22:27:03.632750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.426 [2024-07-15 22:27:03.632817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.426 [2024-07-15 22:27:03.632828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.426 [2024-07-15 22:27:03.632833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.426 [2024-07-15 22:27:03.632838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.426 [2024-07-15 22:27:03.632848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.426 qpair failed and we were unable to recover it. 00:29:38.426 [2024-07-15 22:27:03.642802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.426 [2024-07-15 22:27:03.642867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.426 [2024-07-15 22:27:03.642880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.426 [2024-07-15 22:27:03.642884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.426 [2024-07-15 22:27:03.642889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.426 [2024-07-15 22:27:03.642899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.426 qpair failed and we were unable to recover it. 00:29:38.426 [2024-07-15 22:27:03.652822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.426 [2024-07-15 22:27:03.652886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.426 [2024-07-15 22:27:03.652898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.426 [2024-07-15 22:27:03.652903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.426 [2024-07-15 22:27:03.652907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.426 [2024-07-15 22:27:03.652921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.427 qpair failed and we were unable to recover it. 
00:29:38.427 [2024-07-15 22:27:03.662844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.427 [2024-07-15 22:27:03.662907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.427 [2024-07-15 22:27:03.662919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.427 [2024-07-15 22:27:03.662924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.427 [2024-07-15 22:27:03.662928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.427 [2024-07-15 22:27:03.662939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.427 qpair failed and we were unable to recover it. 00:29:38.427 [2024-07-15 22:27:03.672885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.427 [2024-07-15 22:27:03.672951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.427 [2024-07-15 22:27:03.672962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.427 [2024-07-15 22:27:03.672967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.427 [2024-07-15 22:27:03.672971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.427 [2024-07-15 22:27:03.672982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.427 qpair failed and we were unable to recover it. 00:29:38.427 [2024-07-15 22:27:03.682909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.427 [2024-07-15 22:27:03.682970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.427 [2024-07-15 22:27:03.682982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.427 [2024-07-15 22:27:03.682987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.427 [2024-07-15 22:27:03.682991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.427 [2024-07-15 22:27:03.683002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.427 qpair failed and we were unable to recover it. 
00:29:38.427 [2024-07-15 22:27:03.692943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.427 [2024-07-15 22:27:03.693008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.427 [2024-07-15 22:27:03.693020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.427 [2024-07-15 22:27:03.693024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.427 [2024-07-15 22:27:03.693028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.427 [2024-07-15 22:27:03.693039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.427 qpair failed and we were unable to recover it. 00:29:38.427 [2024-07-15 22:27:03.702937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.427 [2024-07-15 22:27:03.702999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.427 [2024-07-15 22:27:03.703015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.427 [2024-07-15 22:27:03.703020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.427 [2024-07-15 22:27:03.703024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.427 [2024-07-15 22:27:03.703035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.427 qpair failed and we were unable to recover it. 00:29:38.427 [2024-07-15 22:27:03.712997] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.427 [2024-07-15 22:27:03.713098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.427 [2024-07-15 22:27:03.713110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.427 [2024-07-15 22:27:03.713115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.427 [2024-07-15 22:27:03.713119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.427 [2024-07-15 22:27:03.713134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.427 qpair failed and we were unable to recover it. 
00:29:38.427 [2024-07-15 22:27:03.722996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.427 [2024-07-15 22:27:03.723062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.427 [2024-07-15 22:27:03.723074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.427 [2024-07-15 22:27:03.723079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.427 [2024-07-15 22:27:03.723083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.427 [2024-07-15 22:27:03.723094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.427 qpair failed and we were unable to recover it. 00:29:38.427 [2024-07-15 22:27:03.733017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.427 [2024-07-15 22:27:03.733076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.427 [2024-07-15 22:27:03.733088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.427 [2024-07-15 22:27:03.733093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.427 [2024-07-15 22:27:03.733097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.427 [2024-07-15 22:27:03.733108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.427 qpair failed and we were unable to recover it. 00:29:38.427 [2024-07-15 22:27:03.742942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.427 [2024-07-15 22:27:03.743053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.427 [2024-07-15 22:27:03.743066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.427 [2024-07-15 22:27:03.743071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.427 [2024-07-15 22:27:03.743078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.427 [2024-07-15 22:27:03.743090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.427 qpair failed and we were unable to recover it. 
00:29:38.688 [2024-07-15 22:27:03.753086] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.688 [2024-07-15 22:27:03.753174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.688 [2024-07-15 22:27:03.753187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.688 [2024-07-15 22:27:03.753191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.688 [2024-07-15 22:27:03.753196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.688 [2024-07-15 22:27:03.753206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.688 qpair failed and we were unable to recover it. 00:29:38.688 [2024-07-15 22:27:03.763105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.688 [2024-07-15 22:27:03.763195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.688 [2024-07-15 22:27:03.763206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.688 [2024-07-15 22:27:03.763211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.688 [2024-07-15 22:27:03.763215] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.688 [2024-07-15 22:27:03.763226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.688 qpair failed and we were unable to recover it. 00:29:38.688 [2024-07-15 22:27:03.773116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.688 [2024-07-15 22:27:03.773380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.688 [2024-07-15 22:27:03.773394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.688 [2024-07-15 22:27:03.773398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.688 [2024-07-15 22:27:03.773403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.688 [2024-07-15 22:27:03.773413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.688 qpair failed and we were unable to recover it. 
00:29:38.688 [2024-07-15 22:27:03.783212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.688 [2024-07-15 22:27:03.783285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.688 [2024-07-15 22:27:03.783297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.688 [2024-07-15 22:27:03.783302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.688 [2024-07-15 22:27:03.783306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.688 [2024-07-15 22:27:03.783317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.688 qpair failed and we were unable to recover it. 00:29:38.688 [2024-07-15 22:27:03.793186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.688 [2024-07-15 22:27:03.793289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.688 [2024-07-15 22:27:03.793301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.688 [2024-07-15 22:27:03.793306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.688 [2024-07-15 22:27:03.793310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.688 [2024-07-15 22:27:03.793321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.688 qpair failed and we were unable to recover it. 00:29:38.688 [2024-07-15 22:27:03.803159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.688 [2024-07-15 22:27:03.803219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.688 [2024-07-15 22:27:03.803231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.688 [2024-07-15 22:27:03.803236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.688 [2024-07-15 22:27:03.803240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.688 [2024-07-15 22:27:03.803251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.688 qpair failed and we were unable to recover it. 
00:29:38.688 [2024-07-15 22:27:03.813257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.688 [2024-07-15 22:27:03.813322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.688 [2024-07-15 22:27:03.813334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.688 [2024-07-15 22:27:03.813339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.688 [2024-07-15 22:27:03.813343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.688 [2024-07-15 22:27:03.813354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.688 qpair failed and we were unable to recover it. 00:29:38.688 [2024-07-15 22:27:03.823256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.688 [2024-07-15 22:27:03.823318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.688 [2024-07-15 22:27:03.823330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.688 [2024-07-15 22:27:03.823335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.688 [2024-07-15 22:27:03.823339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.688 [2024-07-15 22:27:03.823350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.688 qpair failed and we were unable to recover it. 00:29:38.688 [2024-07-15 22:27:03.833308] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.688 [2024-07-15 22:27:03.833375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.688 [2024-07-15 22:27:03.833387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.688 [2024-07-15 22:27:03.833395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.689 [2024-07-15 22:27:03.833399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:38.689 [2024-07-15 22:27:03.833409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.689 qpair failed and we were unable to recover it. 
00:29:38.689 [2024-07-15 22:27:03.843317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.689 [2024-07-15 22:27:03.843402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.689 [2024-07-15 22:27:03.843414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.689 [2024-07-15 22:27:03.843418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.689 [2024-07-15 22:27:03.843423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90
00:29:38.689 [2024-07-15 22:27:03.843433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:38.689 qpair failed and we were unable to recover it.
[... the same seven-record CONNECT failure block repeats for every intervening connect attempt, with only the timestamps advancing from 22:27:03.853389 through 22:27:04.515166 (roughly one attempt every 10 ms); each attempt ends with "qpair failed and we were unable to recover it." ...]
00:29:39.217 [2024-07-15 22:27:04.525159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:39.217 [2024-07-15 22:27:04.525221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:39.217 [2024-07-15 22:27:04.525233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:39.217 [2024-07-15 22:27:04.525238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:39.217 [2024-07-15 22:27:04.525242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90
00:29:39.217 [2024-07-15 22:27:04.525252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:39.217 qpair failed and we were unable to recover it.
00:29:39.217 [2024-07-15 22:27:04.535214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.218 [2024-07-15 22:27:04.535277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.218 [2024-07-15 22:27:04.535289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.218 [2024-07-15 22:27:04.535294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.218 [2024-07-15 22:27:04.535298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.218 [2024-07-15 22:27:04.535309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.218 qpair failed and we were unable to recover it. 00:29:39.480 [2024-07-15 22:27:04.545226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.480 [2024-07-15 22:27:04.545290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.480 [2024-07-15 22:27:04.545302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.480 [2024-07-15 22:27:04.545307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.480 [2024-07-15 22:27:04.545314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.480 [2024-07-15 22:27:04.545325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.480 qpair failed and we were unable to recover it. 00:29:39.480 [2024-07-15 22:27:04.555220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.480 [2024-07-15 22:27:04.555290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.480 [2024-07-15 22:27:04.555302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.480 [2024-07-15 22:27:04.555307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.480 [2024-07-15 22:27:04.555311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.480 [2024-07-15 22:27:04.555322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.480 qpair failed and we were unable to recover it. 
00:29:39.480 [2024-07-15 22:27:04.565262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.480 [2024-07-15 22:27:04.565324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.480 [2024-07-15 22:27:04.565335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.481 [2024-07-15 22:27:04.565340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.481 [2024-07-15 22:27:04.565344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.481 [2024-07-15 22:27:04.565355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.481 qpair failed and we were unable to recover it. 00:29:39.481 [2024-07-15 22:27:04.575319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.481 [2024-07-15 22:27:04.575423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.481 [2024-07-15 22:27:04.575435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.481 [2024-07-15 22:27:04.575441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.481 [2024-07-15 22:27:04.575445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.481 [2024-07-15 22:27:04.575456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.481 qpair failed and we were unable to recover it. 00:29:39.481 [2024-07-15 22:27:04.585324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.481 [2024-07-15 22:27:04.585387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.481 [2024-07-15 22:27:04.585399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.481 [2024-07-15 22:27:04.585404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.481 [2024-07-15 22:27:04.585408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.481 [2024-07-15 22:27:04.585419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.481 qpair failed and we were unable to recover it. 
00:29:39.481 [2024-07-15 22:27:04.595255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.481 [2024-07-15 22:27:04.595326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.481 [2024-07-15 22:27:04.595338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.481 [2024-07-15 22:27:04.595343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.481 [2024-07-15 22:27:04.595347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.481 [2024-07-15 22:27:04.595357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.481 qpair failed and we were unable to recover it. 00:29:39.481 [2024-07-15 22:27:04.605380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.481 [2024-07-15 22:27:04.605445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.481 [2024-07-15 22:27:04.605457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.481 [2024-07-15 22:27:04.605463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.481 [2024-07-15 22:27:04.605468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.481 [2024-07-15 22:27:04.605480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.481 qpair failed and we were unable to recover it. 00:29:39.481 [2024-07-15 22:27:04.615400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.481 [2024-07-15 22:27:04.615461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.481 [2024-07-15 22:27:04.615473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.481 [2024-07-15 22:27:04.615478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.481 [2024-07-15 22:27:04.615482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.481 [2024-07-15 22:27:04.615492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.481 qpair failed and we were unable to recover it. 
00:29:39.481 [2024-07-15 22:27:04.625428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.481 [2024-07-15 22:27:04.625496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.481 [2024-07-15 22:27:04.625508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.481 [2024-07-15 22:27:04.625513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.481 [2024-07-15 22:27:04.625517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.481 [2024-07-15 22:27:04.625527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.481 qpair failed and we were unable to recover it. 00:29:39.481 [2024-07-15 22:27:04.635455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.481 [2024-07-15 22:27:04.635523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.481 [2024-07-15 22:27:04.635535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.481 [2024-07-15 22:27:04.635540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.481 [2024-07-15 22:27:04.635548] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.481 [2024-07-15 22:27:04.635558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.481 qpair failed and we were unable to recover it. 00:29:39.481 [2024-07-15 22:27:04.645485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.481 [2024-07-15 22:27:04.645592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.481 [2024-07-15 22:27:04.645604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.481 [2024-07-15 22:27:04.645609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.481 [2024-07-15 22:27:04.645613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.481 [2024-07-15 22:27:04.645623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.481 qpair failed and we were unable to recover it. 
00:29:39.481 [2024-07-15 22:27:04.655402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.481 [2024-07-15 22:27:04.655478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.481 [2024-07-15 22:27:04.655490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.481 [2024-07-15 22:27:04.655495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.481 [2024-07-15 22:27:04.655499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.481 [2024-07-15 22:27:04.655511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.481 qpair failed and we were unable to recover it. 00:29:39.481 [2024-07-15 22:27:04.665527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.481 [2024-07-15 22:27:04.665588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.481 [2024-07-15 22:27:04.665601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.481 [2024-07-15 22:27:04.665606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.481 [2024-07-15 22:27:04.665611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.481 [2024-07-15 22:27:04.665623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.481 qpair failed and we were unable to recover it. 00:29:39.481 [2024-07-15 22:27:04.675551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.481 [2024-07-15 22:27:04.675617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.481 [2024-07-15 22:27:04.675629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.481 [2024-07-15 22:27:04.675634] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.481 [2024-07-15 22:27:04.675639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.481 [2024-07-15 22:27:04.675649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.481 qpair failed and we were unable to recover it. 
00:29:39.481 [2024-07-15 22:27:04.685593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.481 [2024-07-15 22:27:04.685660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.481 [2024-07-15 22:27:04.685672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.481 [2024-07-15 22:27:04.685677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.481 [2024-07-15 22:27:04.685681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.481 [2024-07-15 22:27:04.685692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.481 qpair failed and we were unable to recover it. 00:29:39.481 [2024-07-15 22:27:04.695608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.481 [2024-07-15 22:27:04.695712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.481 [2024-07-15 22:27:04.695724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.481 [2024-07-15 22:27:04.695729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.481 [2024-07-15 22:27:04.695733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.481 [2024-07-15 22:27:04.695744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.481 qpair failed and we were unable to recover it. 00:29:39.481 [2024-07-15 22:27:04.705636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.481 [2024-07-15 22:27:04.705697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.481 [2024-07-15 22:27:04.705710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.482 [2024-07-15 22:27:04.705715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.482 [2024-07-15 22:27:04.705719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.482 [2024-07-15 22:27:04.705729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.482 qpair failed and we were unable to recover it. 
00:29:39.482 [2024-07-15 22:27:04.715616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.482 [2024-07-15 22:27:04.715681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.482 [2024-07-15 22:27:04.715693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.482 [2024-07-15 22:27:04.715698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.482 [2024-07-15 22:27:04.715702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.482 [2024-07-15 22:27:04.715713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.482 qpair failed and we were unable to recover it. 00:29:39.482 [2024-07-15 22:27:04.725635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.482 [2024-07-15 22:27:04.725700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.482 [2024-07-15 22:27:04.725712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.482 [2024-07-15 22:27:04.725720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.482 [2024-07-15 22:27:04.725724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.482 [2024-07-15 22:27:04.725735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.482 qpair failed and we were unable to recover it. 00:29:39.482 [2024-07-15 22:27:04.735692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.482 [2024-07-15 22:27:04.735774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.482 [2024-07-15 22:27:04.735793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.482 [2024-07-15 22:27:04.735799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.482 [2024-07-15 22:27:04.735804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.482 [2024-07-15 22:27:04.735817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.482 qpair failed and we were unable to recover it. 
00:29:39.482 [2024-07-15 22:27:04.745735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.482 [2024-07-15 22:27:04.745797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.482 [2024-07-15 22:27:04.745810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.482 [2024-07-15 22:27:04.745815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.482 [2024-07-15 22:27:04.745819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.482 [2024-07-15 22:27:04.745831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.482 qpair failed and we were unable to recover it. 00:29:39.482 [2024-07-15 22:27:04.755777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.482 [2024-07-15 22:27:04.755847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.482 [2024-07-15 22:27:04.755859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.482 [2024-07-15 22:27:04.755864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.482 [2024-07-15 22:27:04.755868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.482 [2024-07-15 22:27:04.755879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.482 qpair failed and we were unable to recover it. 00:29:39.482 [2024-07-15 22:27:04.765823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.482 [2024-07-15 22:27:04.765885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.482 [2024-07-15 22:27:04.765897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.482 [2024-07-15 22:27:04.765902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.482 [2024-07-15 22:27:04.765906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.482 [2024-07-15 22:27:04.765917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.482 qpair failed and we were unable to recover it. 
00:29:39.482 [2024-07-15 22:27:04.775856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.482 [2024-07-15 22:27:04.775956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.482 [2024-07-15 22:27:04.775968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.482 [2024-07-15 22:27:04.775973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.482 [2024-07-15 22:27:04.775977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.482 [2024-07-15 22:27:04.775987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.482 qpair failed and we were unable to recover it. 00:29:39.482 [2024-07-15 22:27:04.785830] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.482 [2024-07-15 22:27:04.785919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.482 [2024-07-15 22:27:04.785931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.482 [2024-07-15 22:27:04.785936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.482 [2024-07-15 22:27:04.785940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.482 [2024-07-15 22:27:04.785951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.482 qpair failed and we were unable to recover it. 00:29:39.482 [2024-07-15 22:27:04.795863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.482 [2024-07-15 22:27:04.795958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.482 [2024-07-15 22:27:04.795977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.482 [2024-07-15 22:27:04.795984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.482 [2024-07-15 22:27:04.795988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.482 [2024-07-15 22:27:04.796002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.482 qpair failed and we were unable to recover it. 
00:29:39.742 [2024-07-15 22:27:04.805885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.742 [2024-07-15 22:27:04.805951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.742 [2024-07-15 22:27:04.805964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.742 [2024-07-15 22:27:04.805969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.742 [2024-07-15 22:27:04.805974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.742 [2024-07-15 22:27:04.805985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.742 qpair failed and we were unable to recover it. 00:29:39.742 [2024-07-15 22:27:04.815913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.742 [2024-07-15 22:27:04.815975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.742 [2024-07-15 22:27:04.815991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.742 [2024-07-15 22:27:04.815996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.742 [2024-07-15 22:27:04.816001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.742 [2024-07-15 22:27:04.816012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.742 qpair failed and we were unable to recover it. 00:29:39.742 [2024-07-15 22:27:04.825939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.742 [2024-07-15 22:27:04.826004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.742 [2024-07-15 22:27:04.826016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.742 [2024-07-15 22:27:04.826021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.742 [2024-07-15 22:27:04.826025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.742 [2024-07-15 22:27:04.826036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.742 qpair failed and we were unable to recover it. 
00:29:39.742 [2024-07-15 22:27:04.835987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.742 [2024-07-15 22:27:04.836091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.742 [2024-07-15 22:27:04.836104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.742 [2024-07-15 22:27:04.836109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.743 [2024-07-15 22:27:04.836113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.743 [2024-07-15 22:27:04.836127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.743 qpair failed and we were unable to recover it. 00:29:39.743 [2024-07-15 22:27:04.845956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.743 [2024-07-15 22:27:04.846020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.743 [2024-07-15 22:27:04.846032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.743 [2024-07-15 22:27:04.846037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.743 [2024-07-15 22:27:04.846041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.743 [2024-07-15 22:27:04.846052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.743 qpair failed and we were unable to recover it. 00:29:39.743 [2024-07-15 22:27:04.856033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.743 [2024-07-15 22:27:04.856095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.743 [2024-07-15 22:27:04.856107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.743 [2024-07-15 22:27:04.856112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.743 [2024-07-15 22:27:04.856116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.743 [2024-07-15 22:27:04.856134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.743 qpair failed and we were unable to recover it. 
00:29:39.743 [2024-07-15 22:27:04.865951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.743 [2024-07-15 22:27:04.866062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.743 [2024-07-15 22:27:04.866074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.743 [2024-07-15 22:27:04.866079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.743 [2024-07-15 22:27:04.866083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.743 [2024-07-15 22:27:04.866093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.743 qpair failed and we were unable to recover it. 00:29:39.743 [2024-07-15 22:27:04.876072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.743 [2024-07-15 22:27:04.876141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.743 [2024-07-15 22:27:04.876153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.743 [2024-07-15 22:27:04.876158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.743 [2024-07-15 22:27:04.876162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.743 [2024-07-15 22:27:04.876173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.743 qpair failed and we were unable to recover it. 00:29:39.743 [2024-07-15 22:27:04.886151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.743 [2024-07-15 22:27:04.886217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.743 [2024-07-15 22:27:04.886229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.743 [2024-07-15 22:27:04.886234] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.743 [2024-07-15 22:27:04.886238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.743 [2024-07-15 22:27:04.886249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.743 qpair failed and we were unable to recover it. 
00:29:39.743 [2024-07-15 22:27:04.896149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.743 [2024-07-15 22:27:04.896212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.743 [2024-07-15 22:27:04.896224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.743 [2024-07-15 22:27:04.896228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.743 [2024-07-15 22:27:04.896232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.743 [2024-07-15 22:27:04.896243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.743 qpair failed and we were unable to recover it. 00:29:39.743 [2024-07-15 22:27:04.906165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.743 [2024-07-15 22:27:04.906229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.743 [2024-07-15 22:27:04.906244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.743 [2024-07-15 22:27:04.906249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.743 [2024-07-15 22:27:04.906253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.743 [2024-07-15 22:27:04.906264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.743 qpair failed and we were unable to recover it. 00:29:39.743 [2024-07-15 22:27:04.916188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.743 [2024-07-15 22:27:04.916275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.743 [2024-07-15 22:27:04.916287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.743 [2024-07-15 22:27:04.916292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.743 [2024-07-15 22:27:04.916296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.743 [2024-07-15 22:27:04.916307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.743 qpair failed and we were unable to recover it. 
00:29:39.743 [2024-07-15 22:27:04.926229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.743 [2024-07-15 22:27:04.926290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.743 [2024-07-15 22:27:04.926302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.743 [2024-07-15 22:27:04.926307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.743 [2024-07-15 22:27:04.926311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.743 [2024-07-15 22:27:04.926322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.743 qpair failed and we were unable to recover it. 00:29:39.743 [2024-07-15 22:27:04.936244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.743 [2024-07-15 22:27:04.936318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.743 [2024-07-15 22:27:04.936330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.743 [2024-07-15 22:27:04.936335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.743 [2024-07-15 22:27:04.936339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.743 [2024-07-15 22:27:04.936350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.743 qpair failed and we were unable to recover it. 00:29:39.743 [2024-07-15 22:27:04.946282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.743 [2024-07-15 22:27:04.946343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.743 [2024-07-15 22:27:04.946355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.743 [2024-07-15 22:27:04.946360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.743 [2024-07-15 22:27:04.946364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.743 [2024-07-15 22:27:04.946377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.743 qpair failed and we were unable to recover it. 
00:29:39.743 [2024-07-15 22:27:04.956297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.743 [2024-07-15 22:27:04.956368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.743 [2024-07-15 22:27:04.956379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.743 [2024-07-15 22:27:04.956384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.743 [2024-07-15 22:27:04.956388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.743 [2024-07-15 22:27:04.956399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.743 qpair failed and we were unable to recover it. 00:29:39.743 [2024-07-15 22:27:04.966311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.743 [2024-07-15 22:27:04.966372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.743 [2024-07-15 22:27:04.966384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.743 [2024-07-15 22:27:04.966389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.743 [2024-07-15 22:27:04.966393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.743 [2024-07-15 22:27:04.966403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.743 qpair failed and we were unable to recover it. 00:29:39.743 [2024-07-15 22:27:04.976366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.743 [2024-07-15 22:27:04.976425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.743 [2024-07-15 22:27:04.976437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.743 [2024-07-15 22:27:04.976442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.743 [2024-07-15 22:27:04.976446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.743 [2024-07-15 22:27:04.976457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.743 qpair failed and we were unable to recover it. 
00:29:39.743 [2024-07-15 22:27:04.986415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.743 [2024-07-15 22:27:04.986483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.743 [2024-07-15 22:27:04.986495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.743 [2024-07-15 22:27:04.986500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.743 [2024-07-15 22:27:04.986504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.743 [2024-07-15 22:27:04.986515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.743 qpair failed and we were unable to recover it. 00:29:39.743 [2024-07-15 22:27:04.996406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.743 [2024-07-15 22:27:04.996475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.743 [2024-07-15 22:27:04.996488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.743 [2024-07-15 22:27:04.996493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.743 [2024-07-15 22:27:04.996497] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.743 [2024-07-15 22:27:04.996507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.743 qpair failed and we were unable to recover it. 00:29:39.743 [2024-07-15 22:27:05.006414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.743 [2024-07-15 22:27:05.006481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.743 [2024-07-15 22:27:05.006493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.743 [2024-07-15 22:27:05.006499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.743 [2024-07-15 22:27:05.006503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.743 [2024-07-15 22:27:05.006513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.743 qpair failed and we were unable to recover it. 
00:29:39.743 [2024-07-15 22:27:05.016464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.743 [2024-07-15 22:27:05.016527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.743 [2024-07-15 22:27:05.016540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.743 [2024-07-15 22:27:05.016545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.743 [2024-07-15 22:27:05.016549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.743 [2024-07-15 22:27:05.016559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.743 qpair failed and we were unable to recover it. 00:29:39.743 [2024-07-15 22:27:05.026509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.743 [2024-07-15 22:27:05.026571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.743 [2024-07-15 22:27:05.026583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.744 [2024-07-15 22:27:05.026588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.744 [2024-07-15 22:27:05.026592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.744 [2024-07-15 22:27:05.026603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.744 qpair failed and we were unable to recover it. 00:29:39.744 [2024-07-15 22:27:05.036531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.744 [2024-07-15 22:27:05.036598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.744 [2024-07-15 22:27:05.036610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.744 [2024-07-15 22:27:05.036615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.744 [2024-07-15 22:27:05.036622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.744 [2024-07-15 22:27:05.036632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.744 qpair failed and we were unable to recover it. 
00:29:39.744 [2024-07-15 22:27:05.046537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.744 [2024-07-15 22:27:05.046600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.744 [2024-07-15 22:27:05.046611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.744 [2024-07-15 22:27:05.046616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.744 [2024-07-15 22:27:05.046620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.744 [2024-07-15 22:27:05.046631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.744 qpair failed and we were unable to recover it. 00:29:39.744 [2024-07-15 22:27:05.056603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.744 [2024-07-15 22:27:05.056663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.744 [2024-07-15 22:27:05.056675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.744 [2024-07-15 22:27:05.056680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.744 [2024-07-15 22:27:05.056684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:39.744 [2024-07-15 22:27:05.056695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.744 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 22:27:05.066592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.003 [2024-07-15 22:27:05.066655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.003 [2024-07-15 22:27:05.066667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.003 [2024-07-15 22:27:05.066672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.003 [2024-07-15 22:27:05.066676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:40.003 [2024-07-15 22:27:05.066687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.003 qpair failed and we were unable to recover it. 
00:29:40.003 [2024-07-15 22:27:05.076616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.003 [2024-07-15 22:27:05.076687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.003 [2024-07-15 22:27:05.076699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.003 [2024-07-15 22:27:05.076703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.003 [2024-07-15 22:27:05.076707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:40.003 [2024-07-15 22:27:05.076718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.003 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 22:27:05.086644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.003 [2024-07-15 22:27:05.086707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.003 [2024-07-15 22:27:05.086719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.003 [2024-07-15 22:27:05.086724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.003 [2024-07-15 22:27:05.086728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:40.003 [2024-07-15 22:27:05.086738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.003 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 22:27:05.096684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.003 [2024-07-15 22:27:05.096747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.003 [2024-07-15 22:27:05.096759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.003 [2024-07-15 22:27:05.096764] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.003 [2024-07-15 22:27:05.096768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:40.003 [2024-07-15 22:27:05.096779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.003 qpair failed and we were unable to recover it. 
00:29:40.003 [2024-07-15 22:27:05.106776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.003 [2024-07-15 22:27:05.106889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.003 [2024-07-15 22:27:05.106907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.003 [2024-07-15 22:27:05.106913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.003 [2024-07-15 22:27:05.106918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:40.003 [2024-07-15 22:27:05.106932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.003 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 22:27:05.116719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.003 [2024-07-15 22:27:05.116792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.003 [2024-07-15 22:27:05.116811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.003 [2024-07-15 22:27:05.116817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.003 [2024-07-15 22:27:05.116822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:40.003 [2024-07-15 22:27:05.116836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.003 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 22:27:05.126664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.003 [2024-07-15 22:27:05.126774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.003 [2024-07-15 22:27:05.126787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.003 [2024-07-15 22:27:05.126797] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.003 [2024-07-15 22:27:05.126801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:40.003 [2024-07-15 22:27:05.126813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.003 qpair failed and we were unable to recover it. 
00:29:40.003 [2024-07-15 22:27:05.136816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.003 [2024-07-15 22:27:05.136882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.003 [2024-07-15 22:27:05.136901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.003 [2024-07-15 22:27:05.136907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.003 [2024-07-15 22:27:05.136911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:40.003 [2024-07-15 22:27:05.136925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.003 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 22:27:05.146774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.003 [2024-07-15 22:27:05.146870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.003 [2024-07-15 22:27:05.146884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.003 [2024-07-15 22:27:05.146889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.003 [2024-07-15 22:27:05.146893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:40.003 [2024-07-15 22:27:05.146904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.003 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 22:27:05.156838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.003 [2024-07-15 22:27:05.156930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.003 [2024-07-15 22:27:05.156943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.003 [2024-07-15 22:27:05.156948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.003 [2024-07-15 22:27:05.156952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:40.003 [2024-07-15 22:27:05.156963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.003 qpair failed and we were unable to recover it. 
00:29:40.003 [2024-07-15 22:27:05.166865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.003 [2024-07-15 22:27:05.166929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.003 [2024-07-15 22:27:05.166941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.003 [2024-07-15 22:27:05.166946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.003 [2024-07-15 22:27:05.166950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:40.003 [2024-07-15 22:27:05.166961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.003 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 22:27:05.176911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.003 [2024-07-15 22:27:05.176979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.003 [2024-07-15 22:27:05.176998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.003 [2024-07-15 22:27:05.177004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.003 [2024-07-15 22:27:05.177009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:40.003 [2024-07-15 22:27:05.177023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.003 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 22:27:05.186954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.003 [2024-07-15 22:27:05.187018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.003 [2024-07-15 22:27:05.187031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.003 [2024-07-15 22:27:05.187036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.003 [2024-07-15 22:27:05.187040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:40.003 [2024-07-15 22:27:05.187051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.003 qpair failed and we were unable to recover it. 
00:29:40.003 [2024-07-15 22:27:05.196974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.003 [2024-07-15 22:27:05.197081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.003 [2024-07-15 22:27:05.197093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.003 [2024-07-15 22:27:05.197098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.003 [2024-07-15 22:27:05.197102] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:40.003 [2024-07-15 22:27:05.197113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.003 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 22:27:05.206991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.003 [2024-07-15 22:27:05.207053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.003 [2024-07-15 22:27:05.207065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.003 [2024-07-15 22:27:05.207069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.003 [2024-07-15 22:27:05.207073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6158000b90 00:29:40.003 [2024-07-15 22:27:05.207084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.003 qpair failed and we were unable to recover it. 
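The block of records above is the expected failure signature while the tc2 test is injecting its disconnect: the target-side _nvmf_ctrlr_add_io_qpair no longer recognizes controller ID 0x1, so every I/O-queue CONNECT comes back with sct 1 (command-specific status type) and sc 130 (0x82, the Fabrics "Connect Invalid Parameters" code), and the host then abandons the qpair with the -6 (ENXIO) CQ transport error. When triaging a run like this, a tally of the rejected CONNECTs and the qpair IDs they hit is usually enough; a minimal sketch, assuming the console output has been saved to a file named build.log:

    # Count rejected I/O-queue CONNECTs, then group the CQ transport errors by qpair id.
    grep -c 'Connect command failed, rc -5' build.log
    grep -Eo 'CQ transport error -6 \(No such device or address\) on qpair id [0-9]+' build.log \
        | sort | uniq -c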
00:29:40.003 [2024-07-15 22:27:05.207286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174ff20 is same with the state(5) to be set 00:29:40.003 Read completed with error (sct=0, sc=8) 00:29:40.003 starting I/O failed 00:29:40.003 Read completed with error (sct=0, sc=8) 00:29:40.003 starting I/O failed 00:29:40.003 Read completed with error (sct=0, sc=8) 00:29:40.003 starting I/O failed 00:29:40.003 Read completed with error (sct=0, sc=8) 00:29:40.003 starting I/O failed 00:29:40.003 Read completed with error (sct=0, sc=8) 00:29:40.003 starting I/O failed 00:29:40.003 Read completed with error (sct=0, sc=8) 00:29:40.003 starting I/O failed 00:29:40.003 Read completed with error (sct=0, sc=8) 00:29:40.003 starting I/O failed 00:29:40.003 Read completed with error (sct=0, sc=8) 00:29:40.003 starting I/O failed 00:29:40.003 Read completed with error (sct=0, sc=8) 00:29:40.003 starting I/O failed 00:29:40.003 Write completed with error (sct=0, sc=8) 00:29:40.003 starting I/O failed 00:29:40.003 Read completed with error (sct=0, sc=8) 00:29:40.003 starting I/O failed 00:29:40.003 Write completed with error (sct=0, sc=8) 00:29:40.003 starting I/O failed 00:29:40.003 Write completed with error (sct=0, sc=8) 00:29:40.003 starting I/O failed 00:29:40.003 Read completed with error (sct=0, sc=8) 00:29:40.003 starting I/O failed 00:29:40.003 Read completed with error (sct=0, sc=8) 00:29:40.003 starting I/O failed 00:29:40.003 Write completed with error (sct=0, sc=8) 00:29:40.003 starting I/O failed 00:29:40.003 Read completed with error (sct=0, sc=8) 00:29:40.003 starting I/O failed 00:29:40.003 Write completed with error (sct=0, sc=8) 00:29:40.003 starting I/O failed 00:29:40.003 Read completed with error (sct=0, sc=8) 00:29:40.003 starting I/O failed 00:29:40.003 Read completed with error (sct=0, sc=8) 00:29:40.003 starting I/O failed 00:29:40.003 Read completed with error (sct=0, sc=8) 00:29:40.003 starting I/O failed 00:29:40.003 Read completed with error (sct=0, sc=8) 00:29:40.003 starting I/O failed 00:29:40.003 Read completed with error (sct=0, sc=8) 00:29:40.003 starting I/O failed 00:29:40.003 Write completed with error (sct=0, sc=8) 00:29:40.003 starting I/O failed 00:29:40.003 Read completed with error (sct=0, sc=8) 00:29:40.003 starting I/O failed 00:29:40.003 Read completed with error (sct=0, sc=8) 00:29:40.003 starting I/O failed 00:29:40.003 Read completed with error (sct=0, sc=8) 00:29:40.003 starting I/O failed 00:29:40.003 Write completed with error (sct=0, sc=8) 00:29:40.003 starting I/O failed 00:29:40.003 Write completed with error (sct=0, sc=8) 00:29:40.003 starting I/O failed 00:29:40.003 Read completed with error (sct=0, sc=8) 00:29:40.003 starting I/O failed 00:29:40.003 Read completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Read completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 [2024-07-15 22:27:05.207577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.004 [2024-07-15 22:27:05.217034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.004 [2024-07-15 22:27:05.217218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.004 [2024-07-15 22:27:05.217239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command 
completed with error: sct 1, sc 130 00:29:40.004 [2024-07-15 22:27:05.217248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.004 [2024-07-15 22:27:05.217254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1742220 00:29:40.004 [2024-07-15 22:27:05.217271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.004 qpair failed and we were unable to recover it. 00:29:40.004 [2024-07-15 22:27:05.227058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.004 [2024-07-15 22:27:05.227148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.004 [2024-07-15 22:27:05.227174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.004 [2024-07-15 22:27:05.227183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.004 [2024-07-15 22:27:05.227189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1742220 00:29:40.004 [2024-07-15 22:27:05.227208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.004 qpair failed and we were unable to recover it. 00:29:40.004 Read completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Read completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Read completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Read completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Read completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Read completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Read completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Read completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Write completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Read completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Write completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Read completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Read completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Read completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Write completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Read completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Write completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Write completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Write completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Write completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Write completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Write completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Read completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 
00:29:40.004 Read completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Read completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Read completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Write completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Read completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Write completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Write completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Write completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Write completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 [2024-07-15 22:27:05.228146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:40.004 Read completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Read completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Read completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Read completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Read completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Read completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Read completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Read completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Read completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Read completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Read completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Read completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Read completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Read completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Write completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Write completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Write completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Read completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Read completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Read completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Write completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Read completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Read completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Write completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Read completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Write completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Write completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Read completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Write completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Read completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 
Write completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 Write completed with error (sct=0, sc=8) 00:29:40.004 starting I/O failed 00:29:40.004 [2024-07-15 22:27:05.228878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:40.004 [2024-07-15 22:27:05.237177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.004 [2024-07-15 22:27:05.237377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.004 [2024-07-15 22:27:05.237438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.004 [2024-07-15 22:27:05.237462] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.004 [2024-07-15 22:27:05.237482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6160000b90 00:29:40.004 [2024-07-15 22:27:05.237528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:40.004 qpair failed and we were unable to recover it. 00:29:40.004 [2024-07-15 22:27:05.247185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.004 [2024-07-15 22:27:05.247384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.004 [2024-07-15 22:27:05.247416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.004 [2024-07-15 22:27:05.247431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.004 [2024-07-15 22:27:05.247444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6160000b90 00:29:40.004 [2024-07-15 22:27:05.247475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:40.004 qpair failed and we were unable to recover it. 00:29:40.004 [2024-07-15 22:27:05.257151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.004 [2024-07-15 22:27:05.257347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.004 [2024-07-15 22:27:05.257411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.004 [2024-07-15 22:27:05.257436] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.004 [2024-07-15 22:27:05.257455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6150000b90 00:29:40.004 [2024-07-15 22:27:05.257509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:40.004 qpair failed and we were unable to recover it. 
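The interleaved bursts of "Read/Write completed with error (sct=0, sc=8) ... starting I/O failed" above are not a new failure mode: sct 0 is the generic status type and sc 8 (0x08) is "Command Aborted due to SQ Deletion", so each burst is the host completing whatever commands were still in flight on a queue at the moment that queue was torn down. A quick read/write breakdown of those aborts, again assuming the log text is saved in build.log:

    # Tally aborted reads vs. writes caused by queue teardown.
    grep -Eo '(Read|Write) completed with error \(sct=0, sc=8\)' build.log | sort | uniq -c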
00:29:40.004 [2024-07-15 22:27:05.267202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.004 [2024-07-15 22:27:05.267384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.004 [2024-07-15 22:27:05.267427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.004 [2024-07-15 22:27:05.267449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.004 [2024-07-15 22:27:05.267467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6150000b90 00:29:40.004 [2024-07-15 22:27:05.267510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:40.004 qpair failed and we were unable to recover it. 00:29:40.004 [2024-07-15 22:27:05.268018] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174ff20 (9): Bad file descriptor 00:29:40.004 Initializing NVMe Controllers 00:29:40.004 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:40.004 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:40.004 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:40.004 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:40.004 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:40.004 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:40.004 Initialization complete. Launching workers. 
00:29:40.004 Starting thread on core 1 00:29:40.004 Starting thread on core 2 00:29:40.004 Starting thread on core 3 00:29:40.004 Starting thread on core 0 00:29:40.004 22:27:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:40.004 00:29:40.004 real 0m11.353s 00:29:40.004 user 0m20.539s 00:29:40.004 sys 0m4.090s 00:29:40.004 22:27:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:40.004 22:27:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:40.004 ************************************ 00:29:40.004 END TEST nvmf_target_disconnect_tc2 00:29:40.004 ************************************ 00:29:40.004 22:27:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:29:40.004 22:27:05 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:40.004 22:27:05 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:40.004 22:27:05 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:40.004 22:27:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:40.004 22:27:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:29:40.264 22:27:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:40.264 22:27:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:29:40.264 22:27:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:40.264 22:27:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:40.264 rmmod nvme_tcp 00:29:40.264 rmmod nvme_fabrics 00:29:40.264 rmmod nvme_keyring 00:29:40.264 22:27:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:40.264 22:27:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:29:40.264 22:27:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:29:40.264 22:27:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2963140 ']' 00:29:40.264 22:27:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2963140 00:29:40.264 22:27:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 2963140 ']' 00:29:40.264 22:27:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 2963140 00:29:40.264 22:27:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:29:40.264 22:27:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:40.264 22:27:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2963140 00:29:40.264 22:27:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:29:40.264 22:27:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:29:40.264 22:27:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2963140' 00:29:40.264 killing process with pid 2963140 00:29:40.264 22:27:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 2963140 00:29:40.264 22:27:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 2963140 00:29:40.264 
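The "Initializing NVMe Controllers ... Starting thread on core 0" block above is the host-side I/O generator attaching cleanly to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 once the fault injection stops; everything after it is teardown: the sync, the per-test timing summary, the END marker, and nvmftestfini unloading the kernel initiator modules (nvme_tcp, nvme_fabrics, nvme_keyring) and killing the SPDK target, pid 2963140, with the interface and namespace cleanup continuing just below. A hedged sketch of an equivalent manual attach check followed by the same teardown; the perf binary path, the workload options, and $tgt_pid are illustrative assumptions, not values taken from this run:

    # Re-attach to the same TCP listener by hand and drive a short read workload at it.
    sudo ./build/examples/perf \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -q 32 -o 4096 -w randread -t 10

    # Then unload the kernel NVMe-oF initiator modules and stop the SPDK target.
    sudo modprobe -v -r nvme-tcp nvme-fabrics        # add nvme-keyring on kernels that ship it
    kill "$tgt_pid" 2>/dev/null || true
    while kill -0 "$tgt_pid" 2>/dev/null; do sleep 0.2; done   # wait for the target to exit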
22:27:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:40.264 22:27:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:40.264 22:27:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:40.264 22:27:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:40.264 22:27:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:40.264 22:27:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:40.264 22:27:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:40.264 22:27:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:42.838 22:27:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:42.839 00:29:42.839 real 0m21.206s 00:29:42.839 user 0m48.285s 00:29:42.839 sys 0m9.744s 00:29:42.839 22:27:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:42.839 22:27:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:42.839 ************************************ 00:29:42.839 END TEST nvmf_target_disconnect 00:29:42.839 ************************************ 00:29:42.839 22:27:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:42.839 22:27:07 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:29:42.839 22:27:07 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:42.839 22:27:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:42.839 22:27:07 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:29:42.839 00:29:42.839 real 22m40.251s 00:29:42.839 user 47m15.119s 00:29:42.839 sys 7m6.820s 00:29:42.839 22:27:07 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:42.839 22:27:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:42.839 ************************************ 00:29:42.839 END TEST nvmf_tcp 00:29:42.839 ************************************ 00:29:42.839 22:27:07 -- common/autotest_common.sh@1142 -- # return 0 00:29:42.839 22:27:07 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:29:42.839 22:27:07 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:42.839 22:27:07 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:42.839 22:27:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:42.839 22:27:07 -- common/autotest_common.sh@10 -- # set +x 00:29:42.839 ************************************ 00:29:42.839 START TEST spdkcli_nvmf_tcp 00:29:42.839 ************************************ 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:42.839 * Looking for test storage... 
00:29:42.839 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:42.839 22:27:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2965073 00:29:42.840 22:27:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2965073 00:29:42.840 22:27:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 2965073 ']' 00:29:42.840 22:27:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:42.840 22:27:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:42.840 22:27:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:42.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:42.840 22:27:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:42.840 22:27:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:42.840 22:27:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:42.840 [2024-07-15 22:27:07.947786] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:29:42.840 [2024-07-15 22:27:07.947858] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2965073 ] 00:29:42.840 EAL: No free 2048 kB hugepages reported on node 1 00:29:42.840 [2024-07-15 22:27:08.011662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:42.840 [2024-07-15 22:27:08.087664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:42.840 [2024-07-15 22:27:08.087668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:43.409 22:27:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:43.409 22:27:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:29:43.409 22:27:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:43.409 22:27:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:43.409 22:27:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:43.670 22:27:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:43.670 22:27:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:29:43.670 22:27:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:43.670 22:27:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:43.670 22:27:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:43.670 22:27:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:43.670 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:43.670 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:43.670 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:29:43.670 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:43.670 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:43.670 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:43.670 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:43.670 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:43.670 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:43.670 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:43.670 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:43.670 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:43.670 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:43.670 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:43.670 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:43.670 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:43.670 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:43.670 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:43.670 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:43.670 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:43.670 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:43.670 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:43.670 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:29:43.670 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:43.670 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:43.670 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:43.670 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:43.670 ' 00:29:46.208 [2024-07-15 22:27:11.089781] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:47.144 [2024-07-15 22:27:12.253661] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:29:49.685 [2024-07-15 22:27:14.391991] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:29:51.064 [2024-07-15 22:27:16.229498] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:29:52.445 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:29:52.445 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:29:52.445 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:29:52.445 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:29:52.445 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:29:52.445 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:29:52.445 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:29:52.445 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:52.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:29:52.445 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:29:52.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:52.445 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:52.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:29:52.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:52.445 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:52.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:29:52.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:52.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:52.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:52.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:52.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:29:52.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:29:52.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:52.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:29:52.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:52.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:29:52.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:29:52.445 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:29:52.445 22:27:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:29:52.445 22:27:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:52.445 22:27:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:52.705 22:27:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:29:52.705 22:27:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:52.705 22:27:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:52.705 22:27:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:29:52.705 22:27:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:29:52.965 22:27:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:29:52.965 22:27:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:29:52.965 22:27:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:29:52.966 22:27:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:52.966 22:27:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:52.966 22:27:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:29:52.966 22:27:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:52.966 22:27:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:52.966 22:27:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:29:52.966 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:29:52.966 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:52.966 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:29:52.966 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:29:52.966 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:29:52.966 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:29:52.966 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:52.966 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:29:52.966 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:29:52.966 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:29:52.966 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:52.966 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:52.966 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:29:52.966 ' 00:29:58.244 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:29:58.244 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:29:58.244 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:58.244 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:29:58.244 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:29:58.244 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:29:58.244 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:29:58.244 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:58.244 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:29:58.244 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 
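For reference, the spdkcli paths that spdkcli_job.py batches in the create/clear passes above can also be driven one command at a time with scripts/spdkcli.py against a running nvmf_tgt. A minimal sketch, reusing arguments that appear in the commands logged above and assuming the target is on the default RPC socket:

# Illustrative sketch only; paths, NQNs and sizes are copied from the spdkcli_job.py commands above.
spdkcli=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py
$spdkcli /bdevs/malloc create 32 512 Malloc3                                      # same size/block-size arguments as above
$spdkcli /nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
$spdkcli /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
$spdkcli /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1
$spdkcli /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4
$spdkcli ll /nvmf                                                                 # the same listing that check_match compares above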
00:29:58.244 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:29:58.244 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:29:58.244 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:29:58.244 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:29:58.244 22:27:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:29:58.244 22:27:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:58.244 22:27:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:58.244 22:27:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2965073 00:29:58.244 22:27:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2965073 ']' 00:29:58.244 22:27:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2965073 00:29:58.244 22:27:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:29:58.244 22:27:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:58.244 22:27:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2965073 00:29:58.244 22:27:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:58.244 22:27:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:58.244 22:27:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2965073' 00:29:58.244 killing process with pid 2965073 00:29:58.244 22:27:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 2965073 00:29:58.244 22:27:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 2965073 00:29:58.244 22:27:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:29:58.244 22:27:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:29:58.244 22:27:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2965073 ']' 00:29:58.244 22:27:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2965073 00:29:58.244 22:27:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2965073 ']' 00:29:58.244 22:27:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2965073 00:29:58.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2965073) - No such process 00:29:58.244 22:27:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 2965073 is not found' 00:29:58.244 Process with pid 2965073 is not found 00:29:58.244 22:27:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:29:58.244 22:27:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:29:58.244 22:27:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:29:58.244 00:29:58.244 real 0m15.522s 00:29:58.244 user 0m31.995s 00:29:58.244 sys 0m0.697s 00:29:58.244 22:27:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:58.244 22:27:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:58.244 ************************************ 00:29:58.244 END TEST spdkcli_nvmf_tcp 00:29:58.244 ************************************ 00:29:58.244 22:27:23 -- common/autotest_common.sh@1142 -- # return 0 00:29:58.244 22:27:23 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:58.244 22:27:23 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:58.244 22:27:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:58.244 22:27:23 -- common/autotest_common.sh@10 -- # set +x 00:29:58.244 ************************************ 00:29:58.244 START TEST nvmf_identify_passthru 00:29:58.244 ************************************ 00:29:58.244 22:27:23 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:58.244 * Looking for test storage... 00:29:58.244 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:58.244 22:27:23 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:58.244 22:27:23 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:29:58.244 22:27:23 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:58.244 22:27:23 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:58.244 22:27:23 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:58.244 22:27:23 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:58.244 22:27:23 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:58.244 22:27:23 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:58.244 22:27:23 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:58.244 22:27:23 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:58.244 22:27:23 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:58.244 22:27:23 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:58.244 22:27:23 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:58.244 22:27:23 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:58.244 22:27:23 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:58.244 22:27:23 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:58.244 22:27:23 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:58.244 22:27:23 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:58.244 22:27:23 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:58.244 22:27:23 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:58.244 22:27:23 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:58.244 22:27:23 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:58.244 22:27:23 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.244 22:27:23 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.244 22:27:23 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.244 22:27:23 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:58.244 22:27:23 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.244 22:27:23 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:29:58.244 22:27:23 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:58.244 22:27:23 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:58.244 22:27:23 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:58.244 22:27:23 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:58.244 22:27:23 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:58.244 22:27:23 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:58.244 22:27:23 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:58.244 22:27:23 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:58.244 22:27:23 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:58.244 22:27:23 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:58.244 22:27:23 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:58.245 22:27:23 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:58.245 22:27:23 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.245 22:27:23 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.245 22:27:23 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.245 22:27:23 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:58.245 22:27:23 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.245 22:27:23 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:29:58.245 22:27:23 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:58.245 22:27:23 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:58.245 22:27:23 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:58.245 22:27:23 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:58.245 22:27:23 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:58.245 22:27:23 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:58.245 22:27:23 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:58.245 22:27:23 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:58.245 22:27:23 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:58.245 22:27:23 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:58.245 22:27:23 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:29:58.245 22:27:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:04.835 22:27:29 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:04.835 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:04.835 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:04.835 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:04.835 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
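The trace above is nvmf/common.sh matching known Intel/Mellanox PCI device IDs and then resolving each PCI function to its kernel net device through sysfs; the two E810 ports at 0000:4b:00.0 and 0000:4b:00.1 resolve to cvl_0_0 and cvl_0_1, and cvl_0_0 is the interface nvmf_tcp_init moves into a network namespace below. The same lookup can be reproduced by hand; a small illustrative sketch using the PCI address reported above:

# Sketch only; the PCI address is taken from the 'Found 0000:4b:00.0 (0x8086 - 0x159b)' line above.
pci=0000:4b:00.0
cat /sys/bus/pci/devices/$pci/vendor /sys/bus/pci/devices/$pci/device   # 0x8086 / 0x159b, an E810 port handled by the ice driver
ls /sys/bus/pci/devices/$pci/net/                                       # prints cvl_0_0, the name used by nvmf_tcp_init below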
00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:04.835 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:04.835 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.550 ms 00:30:04.835 00:30:04.835 --- 10.0.0.2 ping statistics --- 00:30:04.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:04.835 rtt min/avg/max/mdev = 0.550/0.550/0.550/0.000 ms 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:04.835 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:04.835 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:30:04.835 00:30:04.835 --- 10.0.0.1 ping statistics --- 00:30:04.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:04.835 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:04.835 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:04.836 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:04.836 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:04.836 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:04.836 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:04.836 22:27:29 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:04.836 22:27:29 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:04.836 22:27:29 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:04.836 22:27:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:04.836 22:27:29 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:04.836 22:27:29 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:30:04.836 22:27:29 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:30:04.836 22:27:29 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:30:04.836 22:27:29 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:30:04.836 22:27:29 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:30:04.836 22:27:29 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:30:04.836 22:27:29 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:04.836 22:27:29 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:04.836 22:27:29 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:30:04.836 22:27:29 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:30:04.836 22:27:29 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:30:04.836 22:27:29 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:65:00.0 00:30:04.836 22:27:29 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:30:04.836 22:27:29 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:30:04.836 22:27:29 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:30:04.836 22:27:29 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:04.836 22:27:29 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:04.836 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.407 
22:27:30 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605487 00:30:05.407 22:27:30 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:30:05.407 22:27:30 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:05.407 22:27:30 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:05.407 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.668 22:27:30 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:30:05.668 22:27:30 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:05.668 22:27:30 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:05.668 22:27:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:05.668 22:27:30 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:05.668 22:27:30 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:05.668 22:27:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:05.668 22:27:30 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2972270 00:30:05.668 22:27:30 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:05.668 22:27:30 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2972270 00:30:05.668 22:27:30 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 2972270 ']' 00:30:05.668 22:27:30 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:05.668 22:27:30 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:05.668 22:27:30 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:05.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:05.668 22:27:30 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:05.668 22:27:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:05.668 22:27:30 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:05.929 [2024-07-15 22:27:31.001600] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:30:05.929 [2024-07-15 22:27:31.001648] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:05.929 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.929 [2024-07-15 22:27:31.065089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:05.929 [2024-07-15 22:27:31.130363] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:05.929 [2024-07-15 22:27:31.130397] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:05.929 [2024-07-15 22:27:31.130405] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:05.929 [2024-07-15 22:27:31.130411] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:05.929 [2024-07-15 22:27:31.130417] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:05.929 [2024-07-15 22:27:31.130560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:05.929 [2024-07-15 22:27:31.130676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:05.929 [2024-07-15 22:27:31.130836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:05.929 [2024-07-15 22:27:31.130838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:06.499 22:27:31 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:06.499 22:27:31 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:30:06.499 22:27:31 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:06.499 22:27:31 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:06.499 22:27:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:06.499 INFO: Log level set to 20 00:30:06.499 INFO: Requests: 00:30:06.499 { 00:30:06.499 "jsonrpc": "2.0", 00:30:06.499 "method": "nvmf_set_config", 00:30:06.499 "id": 1, 00:30:06.499 "params": { 00:30:06.499 "admin_cmd_passthru": { 00:30:06.499 "identify_ctrlr": true 00:30:06.499 } 00:30:06.499 } 00:30:06.499 } 00:30:06.499 00:30:06.499 INFO: response: 00:30:06.499 { 00:30:06.499 "jsonrpc": "2.0", 00:30:06.499 "id": 1, 00:30:06.499 "result": true 00:30:06.499 } 00:30:06.499 00:30:06.499 22:27:31 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:06.499 22:27:31 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:06.499 22:27:31 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:06.499 22:27:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:06.499 INFO: Setting log level to 20 00:30:06.499 INFO: Setting log level to 20 00:30:06.499 INFO: Log level set to 20 00:30:06.499 INFO: Log level set to 20 00:30:06.499 INFO: Requests: 00:30:06.499 { 00:30:06.499 "jsonrpc": "2.0", 00:30:06.499 "method": "framework_start_init", 00:30:06.499 "id": 1 00:30:06.499 } 00:30:06.499 00:30:06.499 INFO: Requests: 00:30:06.499 { 00:30:06.499 "jsonrpc": "2.0", 00:30:06.499 "method": "framework_start_init", 00:30:06.499 "id": 1 00:30:06.499 } 00:30:06.499 00:30:06.758 [2024-07-15 22:27:31.847542] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:06.758 INFO: response: 00:30:06.758 { 00:30:06.758 "jsonrpc": "2.0", 00:30:06.758 "id": 1, 00:30:06.758 "result": true 00:30:06.758 } 00:30:06.758 00:30:06.758 INFO: response: 00:30:06.758 { 00:30:06.758 "jsonrpc": "2.0", 00:30:06.758 "id": 1, 00:30:06.758 "result": true 00:30:06.758 } 00:30:06.758 00:30:06.758 22:27:31 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:06.758 22:27:31 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:06.758 22:27:31 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:06.759 22:27:31 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:30:06.759 INFO: Setting log level to 40 00:30:06.759 INFO: Setting log level to 40 00:30:06.759 INFO: Setting log level to 40 00:30:06.759 [2024-07-15 22:27:31.860867] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:06.759 22:27:31 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:06.759 22:27:31 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:30:06.759 22:27:31 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:06.759 22:27:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:06.759 22:27:31 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:30:06.759 22:27:31 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:06.759 22:27:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:07.019 Nvme0n1 00:30:07.019 22:27:32 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:07.019 22:27:32 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:07.019 22:27:32 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:07.019 22:27:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:07.019 22:27:32 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:07.019 22:27:32 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:07.019 22:27:32 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:07.019 22:27:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:07.019 22:27:32 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:07.019 22:27:32 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:07.020 22:27:32 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:07.020 22:27:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:07.020 [2024-07-15 22:27:32.245445] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:07.020 22:27:32 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:07.020 22:27:32 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:07.020 22:27:32 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:07.020 22:27:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:07.020 [ 00:30:07.020 { 00:30:07.020 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:07.020 "subtype": "Discovery", 00:30:07.020 "listen_addresses": [], 00:30:07.020 "allow_any_host": true, 00:30:07.020 "hosts": [] 00:30:07.020 }, 00:30:07.020 { 00:30:07.020 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:07.020 "subtype": "NVMe", 00:30:07.020 "listen_addresses": [ 00:30:07.020 { 00:30:07.020 "trtype": "TCP", 00:30:07.020 "adrfam": "IPv4", 00:30:07.020 "traddr": "10.0.0.2", 00:30:07.020 "trsvcid": "4420" 00:30:07.020 } 00:30:07.020 ], 00:30:07.020 "allow_any_host": true, 00:30:07.020 "hosts": [], 00:30:07.020 "serial_number": 
"SPDK00000000000001", 00:30:07.020 "model_number": "SPDK bdev Controller", 00:30:07.020 "max_namespaces": 1, 00:30:07.020 "min_cntlid": 1, 00:30:07.020 "max_cntlid": 65519, 00:30:07.020 "namespaces": [ 00:30:07.020 { 00:30:07.020 "nsid": 1, 00:30:07.020 "bdev_name": "Nvme0n1", 00:30:07.020 "name": "Nvme0n1", 00:30:07.020 "nguid": "36344730526054870025384500000044", 00:30:07.020 "uuid": "36344730-5260-5487-0025-384500000044" 00:30:07.020 } 00:30:07.020 ] 00:30:07.020 } 00:30:07.020 ] 00:30:07.020 22:27:32 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:07.020 22:27:32 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:07.020 22:27:32 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:07.020 22:27:32 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:07.020 EAL: No free 2048 kB hugepages reported on node 1 00:30:07.279 22:27:32 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:30:07.279 22:27:32 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:07.279 22:27:32 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:07.279 22:27:32 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:07.279 EAL: No free 2048 kB hugepages reported on node 1 00:30:07.538 22:27:32 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:30:07.538 22:27:32 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:30:07.538 22:27:32 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:30:07.538 22:27:32 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:07.538 22:27:32 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:07.538 22:27:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:07.538 22:27:32 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:07.538 22:27:32 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:07.538 22:27:32 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:07.538 22:27:32 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:07.538 22:27:32 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:30:07.538 22:27:32 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:07.538 22:27:32 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:30:07.538 22:27:32 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:07.538 22:27:32 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:07.538 rmmod nvme_tcp 00:30:07.538 rmmod nvme_fabrics 00:30:07.538 rmmod nvme_keyring 00:30:07.538 22:27:32 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:07.538 22:27:32 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:30:07.538 22:27:32 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:30:07.538 22:27:32 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2972270 ']' 00:30:07.538 22:27:32 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2972270 00:30:07.538 22:27:32 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 2972270 ']' 00:30:07.538 22:27:32 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 2972270 00:30:07.538 22:27:32 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:30:07.538 22:27:32 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:07.538 22:27:32 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2972270 00:30:07.538 22:27:32 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:07.538 22:27:32 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:07.538 22:27:32 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2972270' 00:30:07.538 killing process with pid 2972270 00:30:07.538 22:27:32 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 2972270 00:30:07.538 22:27:32 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 2972270 00:30:07.798 22:27:33 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:07.798 22:27:33 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:07.798 22:27:33 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:07.798 22:27:33 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:07.798 22:27:33 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:07.798 22:27:33 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:07.798 22:27:33 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:07.798 22:27:33 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:10.341 22:27:35 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:10.341 00:30:10.341 real 0m11.691s 00:30:10.341 user 0m9.470s 00:30:10.341 sys 0m5.481s 00:30:10.341 22:27:35 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:10.341 22:27:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:10.341 ************************************ 00:30:10.341 END TEST nvmf_identify_passthru 00:30:10.341 ************************************ 00:30:10.341 22:27:35 -- common/autotest_common.sh@1142 -- # return 0 00:30:10.341 22:27:35 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:10.341 22:27:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:10.341 22:27:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:10.341 22:27:35 -- common/autotest_common.sh@10 -- # set +x 00:30:10.341 ************************************ 00:30:10.341 START TEST nvmf_dif 00:30:10.341 ************************************ 00:30:10.341 22:27:35 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:10.341 * Looking for test storage... 
00:30:10.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:10.341 22:27:35 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:10.341 22:27:35 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:30:10.341 22:27:35 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:10.341 22:27:35 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:10.341 22:27:35 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:10.341 22:27:35 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:10.341 22:27:35 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:10.341 22:27:35 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:10.341 22:27:35 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:10.341 22:27:35 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:10.341 22:27:35 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:10.341 22:27:35 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:10.341 22:27:35 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:10.341 22:27:35 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:10.341 22:27:35 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:10.341 22:27:35 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:10.341 22:27:35 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:10.341 22:27:35 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:10.341 22:27:35 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:10.341 22:27:35 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:10.341 22:27:35 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:10.341 22:27:35 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:10.341 22:27:35 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.341 22:27:35 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.342 22:27:35 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.342 22:27:35 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:30:10.342 22:27:35 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.342 22:27:35 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:30:10.342 22:27:35 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:10.342 22:27:35 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:10.342 22:27:35 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:10.342 22:27:35 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:10.342 22:27:35 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:10.342 22:27:35 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:10.342 22:27:35 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:10.342 22:27:35 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:10.342 22:27:35 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:30:10.342 22:27:35 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:10.342 22:27:35 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:10.342 22:27:35 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:30:10.342 22:27:35 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:30:10.342 22:27:35 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:10.342 22:27:35 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:10.342 22:27:35 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:10.342 22:27:35 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:10.342 22:27:35 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:10.342 22:27:35 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:10.342 22:27:35 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:10.342 22:27:35 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:10.342 22:27:35 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:10.342 22:27:35 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:10.342 22:27:35 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:30:10.342 22:27:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:16.974 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:16.974 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:16.974 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:16.974 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:16.974 22:27:41 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:16.974 22:27:42 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:16.974 22:27:42 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:16.974 22:27:42 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:16.974 22:27:42 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:16.974 22:27:42 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:16.974 22:27:42 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:16.974 22:27:42 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:16.974 22:27:42 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:16.974 22:27:42 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:16.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:16.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:30:16.974 00:30:16.974 --- 10.0.0.2 ping statistics --- 00:30:16.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.974 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:30:16.974 22:27:42 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:17.233 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:17.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:30:17.234 00:30:17.234 --- 10.0.0.1 ping statistics --- 00:30:17.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:17.234 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:30:17.234 22:27:42 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:17.234 22:27:42 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:30:17.234 22:27:42 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:17.234 22:27:42 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:20.523 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:20.523 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:20.523 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:20.523 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:20.523 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:20.523 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:20.523 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:20.523 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:20.523 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:20.523 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:30:20.523 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:20.523 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:20.523 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:20.523 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:20.523 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:20.523 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:20.523 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:20.784 22:27:45 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:20.784 22:27:45 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:20.784 22:27:45 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:20.784 22:27:45 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:20.784 22:27:45 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:20.784 22:27:45 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:20.784 22:27:45 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:20.784 22:27:45 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:30:20.784 22:27:45 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:20.784 22:27:45 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:20.784 22:27:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:20.784 22:27:45 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2978267 00:30:20.784 22:27:45 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2978267 00:30:20.784 22:27:45 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:20.784 22:27:45 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 2978267 ']' 00:30:20.784 22:27:45 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:20.784 22:27:45 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:20.784 22:27:45 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:20.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:20.784 22:27:45 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:20.784 22:27:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:20.784 [2024-07-15 22:27:46.011244] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:30:20.784 [2024-07-15 22:27:46.011295] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:20.784 EAL: No free 2048 kB hugepages reported on node 1 00:30:20.784 [2024-07-15 22:27:46.081276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:21.044 [2024-07-15 22:27:46.157429] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:21.044 [2024-07-15 22:27:46.157467] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:21.044 [2024-07-15 22:27:46.157474] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:21.044 [2024-07-15 22:27:46.157481] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:21.044 [2024-07-15 22:27:46.157486] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
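The target itself was started above with ip netns exec so that it owns cvl_0_0 inside the test namespace, and waitforlisten then blocks until the RPC socket answers. A hedged equivalent of that launch-and-poll step (not the suite's waitforlisten; the relative paths and the default /var/tmp/spdk.sock socket are assumptions):

# Sketch: start nvmf_tgt in the test namespace and poll its RPC socket.
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
tgt_pid=$!
for _ in $(seq 1 100); do
  if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
    echo "nvmf_tgt (pid $tgt_pid) is up and answering RPCs"
    break
  fi
  sleep 0.5
done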
00:30:21.044 [2024-07-15 22:27:46.157505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:21.615 22:27:46 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:21.615 22:27:46 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:30:21.615 22:27:46 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:21.615 22:27:46 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:21.615 22:27:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:21.615 22:27:46 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:21.615 22:27:46 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:30:21.615 22:27:46 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:21.615 22:27:46 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.615 22:27:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:21.615 [2024-07-15 22:27:46.821378] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:21.615 22:27:46 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.615 22:27:46 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:21.615 22:27:46 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:21.615 22:27:46 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:21.615 22:27:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:21.615 ************************************ 00:30:21.615 START TEST fio_dif_1_default 00:30:21.615 ************************************ 00:30:21.615 22:27:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:30:21.615 22:27:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:30:21.615 22:27:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:30:21.615 22:27:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:30:21.615 22:27:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:30:21.615 22:27:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:30:21.615 22:27:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:21.615 22:27:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.615 22:27:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:21.615 bdev_null0 00:30:21.615 22:27:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.615 22:27:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:21.615 22:27:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.615 22:27:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:21.615 22:27:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.615 22:27:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:21.615 22:27:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.615 22:27:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:21.615 22:27:46 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.615 22:27:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:21.615 22:27:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.615 22:27:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:21.615 [2024-07-15 22:27:46.905693] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:21.615 22:27:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.615 22:27:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:21.615 22:27:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:21.615 22:27:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:21.615 22:27:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:30:21.615 22:27:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:21.615 22:27:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:30:21.615 22:27:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:21.615 22:27:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:21.615 22:27:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:30:21.615 22:27:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:21.615 { 00:30:21.615 "params": { 00:30:21.615 "name": "Nvme$subsystem", 00:30:21.615 "trtype": "$TEST_TRANSPORT", 00:30:21.615 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:21.615 "adrfam": "ipv4", 00:30:21.615 "trsvcid": "$NVMF_PORT", 00:30:21.615 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:21.615 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:21.615 "hdgst": ${hdgst:-false}, 00:30:21.615 "ddgst": ${ddgst:-false} 00:30:21.615 }, 00:30:21.615 "method": "bdev_nvme_attach_controller" 00:30:21.615 } 00:30:21.615 EOF 00:30:21.615 )") 00:30:21.615 22:27:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:21.615 22:27:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:21.615 22:27:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:30:21.615 22:27:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:21.615 22:27:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:30:21.615 22:27:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:21.615 22:27:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:30:21.615 22:27:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:21.615 22:27:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:21.615 22:27:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:30:21.615 22:27:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:21.615 22:27:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:30:21.616 22:27:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:30:21.616 22:27:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:30:21.616 22:27:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:21.616 22:27:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:30:21.616 22:27:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:30:21.616 22:27:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:21.616 "params": { 00:30:21.616 "name": "Nvme0", 00:30:21.616 "trtype": "tcp", 00:30:21.616 "traddr": "10.0.0.2", 00:30:21.616 "adrfam": "ipv4", 00:30:21.616 "trsvcid": "4420", 00:30:21.616 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:21.616 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:21.616 "hdgst": false, 00:30:21.616 "ddgst": false 00:30:21.616 }, 00:30:21.616 "method": "bdev_nvme_attach_controller" 00:30:21.616 }' 00:30:21.913 22:27:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:21.914 22:27:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:21.914 22:27:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:21.914 22:27:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:21.914 22:27:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:21.914 22:27:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:21.914 22:27:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:21.914 22:27:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:21.914 22:27:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:21.914 22:27:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:22.183 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:22.183 fio-3.35 00:30:22.183 Starting 1 thread 00:30:22.183 EAL: No free 2048 kB hugepages reported on node 1 00:30:34.391 00:30:34.391 filename0: (groupid=0, jobs=1): err= 0: pid=2978819: Mon Jul 15 22:27:57 2024 00:30:34.391 read: IOPS=185, BW=742KiB/s (760kB/s)(7440KiB/10022msec) 00:30:34.391 slat (nsec): min=5402, max=32055, avg=6186.53, stdev=1372.72 00:30:34.391 clat (usec): min=752, max=44089, avg=21534.03, stdev=20375.26 00:30:34.391 lat (usec): min=760, max=44121, avg=21540.21, stdev=20375.26 00:30:34.391 clat percentiles (usec): 00:30:34.391 | 1.00th=[ 1074], 5.00th=[ 1090], 10.00th=[ 1106], 20.00th=[ 1123], 00:30:34.391 | 30.00th=[ 1123], 40.00th=[ 1139], 50.00th=[41157], 60.00th=[41681], 00:30:34.391 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:30:34.391 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:30:34.391 | 99.99th=[44303] 00:30:34.391 bw ( KiB/s): min= 704, max= 768, per=99.95%, avg=742.40, stdev=30.45, samples=20 00:30:34.391 iops : min= 176, max= 192, 
avg=185.60, stdev= 7.61, samples=20 00:30:34.391 lat (usec) : 1000=0.43% 00:30:34.391 lat (msec) : 2=49.46%, 50=50.11% 00:30:34.391 cpu : usr=95.35%, sys=4.45%, ctx=10, majf=0, minf=237 00:30:34.391 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:34.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.391 issued rwts: total=1860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.391 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:34.391 00:30:34.391 Run status group 0 (all jobs): 00:30:34.391 READ: bw=742KiB/s (760kB/s), 742KiB/s-742KiB/s (760kB/s-760kB/s), io=7440KiB (7619kB), run=10022-10022msec 00:30:34.391 22:27:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:30:34.391 22:27:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:30:34.391 22:27:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:30:34.391 22:27:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:34.391 22:27:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:30:34.391 22:27:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:34.391 22:27:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.391 22:27:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:34.391 22:27:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.391 22:27:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:34.391 22:27:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.391 22:27:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:34.391 22:27:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.391 00:30:34.391 real 0m11.209s 00:30:34.391 user 0m24.774s 00:30:34.391 sys 0m0.787s 00:30:34.391 22:27:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:34.391 22:27:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:34.391 ************************************ 00:30:34.391 END TEST fio_dif_1_default 00:30:34.391 ************************************ 00:30:34.391 22:27:58 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:30:34.391 22:27:58 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:30:34.391 22:27:58 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:34.391 22:27:58 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:34.391 22:27:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:34.391 ************************************ 00:30:34.391 START TEST fio_dif_1_multi_subsystems 00:30:34.391 ************************************ 00:30:34.391 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:30:34.391 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:30:34.391 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:30:34.391 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:30:34.391 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems 
-- target/dif.sh@30 -- # for sub in "$@" 00:30:34.391 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:30:34.391 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:30:34.391 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:34.391 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.391 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:34.391 bdev_null0 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:34.392 [2024-07-15 22:27:58.193498] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:34.392 bdev_null1 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 
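Each create_subsystem call in this test reduces to four RPCs against the running target: create a null bdev, create the subsystem, attach the bdev as a namespace, and add the TCP listener. Assuming rpc_cmd simply forwards its arguments to scripts/rpc.py, the sequence for subsystem 0 (all values copied from the trace) is roughly:

# Four RPCs behind create_subsystem 0; arguments copied verbatim from the trace.
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420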
00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:34.392 { 00:30:34.392 "params": { 00:30:34.392 "name": "Nvme$subsystem", 00:30:34.392 "trtype": "$TEST_TRANSPORT", 00:30:34.392 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:34.392 "adrfam": "ipv4", 00:30:34.392 "trsvcid": "$NVMF_PORT", 00:30:34.392 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:34.392 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:34.392 "hdgst": ${hdgst:-false}, 00:30:34.392 "ddgst": ${ddgst:-false} 00:30:34.392 }, 00:30:34.392 "method": "bdev_nvme_attach_controller" 00:30:34.392 } 00:30:34.392 EOF 00:30:34.392 )") 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:34.392 { 00:30:34.392 "params": { 00:30:34.392 "name": "Nvme$subsystem", 00:30:34.392 "trtype": "$TEST_TRANSPORT", 00:30:34.392 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:34.392 "adrfam": "ipv4", 00:30:34.392 "trsvcid": "$NVMF_PORT", 00:30:34.392 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:34.392 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:34.392 "hdgst": ${hdgst:-false}, 00:30:34.392 "ddgst": ${ddgst:-false} 00:30:34.392 }, 00:30:34.392 "method": "bdev_nvme_attach_controller" 00:30:34.392 } 00:30:34.392 EOF 00:30:34.392 )") 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
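The JSON half of this fio_bdev invocation (the attach-controller config fed on /dev/fd/62) is printed just below; the job-file half that gen_fio_conf feeds on /dev/fd/61 is never echoed. A hedged reconstruction of roughly what that job looks like for this two-subsystem run, inferred from the fio banner that follows (the bdev names Nvme0n1/Nvme1n1 and the global options are assumptions, not the suite's exact output):

# Hypothetical job file comparable to what gen_fio_conf writes for this run.
cat > /tmp/dif_multi.fio <<'EOF'
[global]
thread=1
ioengine=spdk_bdev
direct=1
time_based=1
runtime=10

[filename0]
filename=Nvme0n1
rw=randread
bs=4096
iodepth=4

[filename1]
filename=Nvme1n1
rw=randread
bs=4096
iodepth=4
EOF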
00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:34.392 "params": { 00:30:34.392 "name": "Nvme0", 00:30:34.392 "trtype": "tcp", 00:30:34.392 "traddr": "10.0.0.2", 00:30:34.392 "adrfam": "ipv4", 00:30:34.392 "trsvcid": "4420", 00:30:34.392 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:34.392 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:34.392 "hdgst": false, 00:30:34.392 "ddgst": false 00:30:34.392 }, 00:30:34.392 "method": "bdev_nvme_attach_controller" 00:30:34.392 },{ 00:30:34.392 "params": { 00:30:34.392 "name": "Nvme1", 00:30:34.392 "trtype": "tcp", 00:30:34.392 "traddr": "10.0.0.2", 00:30:34.392 "adrfam": "ipv4", 00:30:34.392 "trsvcid": "4420", 00:30:34.392 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:34.392 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:34.392 "hdgst": false, 00:30:34.392 "ddgst": false 00:30:34.392 }, 00:30:34.392 "method": "bdev_nvme_attach_controller" 00:30:34.392 }' 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:34.392 22:27:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:34.392 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:34.392 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:34.392 fio-3.35 00:30:34.392 Starting 2 threads 00:30:34.392 EAL: No free 2048 kB hugepages reported on node 1 00:30:44.370 00:30:44.370 filename0: (groupid=0, jobs=1): err= 0: pid=2981166: Mon Jul 15 22:28:09 2024 00:30:44.370 read: IOPS=185, BW=741KiB/s (759kB/s)(7440KiB/10035msec) 00:30:44.370 slat (nsec): min=5458, max=41728, avg=6666.88, stdev=1732.42 00:30:44.370 clat (usec): min=1124, max=42747, avg=21561.63, stdev=20080.64 00:30:44.370 lat (usec): min=1130, max=42780, avg=21568.29, stdev=20080.46 00:30:44.370 clat percentiles (usec): 00:30:44.370 | 1.00th=[ 1205], 5.00th=[ 1401], 10.00th=[ 1418], 20.00th=[ 1434], 00:30:44.370 | 30.00th=[ 1467], 40.00th=[ 1483], 50.00th=[41157], 60.00th=[41681], 00:30:44.370 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:30:44.370 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:30:44.370 | 99.99th=[42730] 
00:30:44.370 bw ( KiB/s): min= 704, max= 768, per=50.09%, avg=742.40, stdev=30.45, samples=20 00:30:44.370 iops : min= 176, max= 192, avg=185.60, stdev= 7.61, samples=20 00:30:44.370 lat (msec) : 2=49.89%, 50=50.11% 00:30:44.370 cpu : usr=97.84%, sys=1.93%, ctx=17, majf=0, minf=165 00:30:44.370 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:44.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:44.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:44.370 issued rwts: total=1860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:44.370 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:44.370 filename1: (groupid=0, jobs=1): err= 0: pid=2981167: Mon Jul 15 22:28:09 2024 00:30:44.370 read: IOPS=185, BW=741KiB/s (758kB/s)(7424KiB/10025msec) 00:30:44.370 slat (nsec): min=5475, max=41866, avg=6518.57, stdev=1647.73 00:30:44.370 clat (usec): min=1172, max=42920, avg=21587.41, stdev=20068.94 00:30:44.370 lat (usec): min=1178, max=42957, avg=21593.93, stdev=20068.80 00:30:44.370 clat percentiles (usec): 00:30:44.370 | 1.00th=[ 1221], 5.00th=[ 1385], 10.00th=[ 1418], 20.00th=[ 1434], 00:30:44.370 | 30.00th=[ 1467], 40.00th=[ 1483], 50.00th=[41157], 60.00th=[41681], 00:30:44.370 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:30:44.370 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:30:44.370 | 99.99th=[42730] 00:30:44.370 bw ( KiB/s): min= 672, max= 768, per=49.96%, avg=740.80, stdev=34.86, samples=20 00:30:44.370 iops : min= 168, max= 192, avg=185.20, stdev= 8.72, samples=20 00:30:44.370 lat (msec) : 2=49.78%, 50=50.22% 00:30:44.370 cpu : usr=98.17%, sys=1.62%, ctx=13, majf=0, minf=171 00:30:44.370 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:44.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:44.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:44.370 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:44.370 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:44.370 00:30:44.370 Run status group 0 (all jobs): 00:30:44.370 READ: bw=1481KiB/s (1517kB/s), 741KiB/s-741KiB/s (758kB/s-759kB/s), io=14.5MiB (15.2MB), run=10025-10035msec 00:30:44.370 22:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:30:44.370 22:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:30:44.370 22:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:44.370 22:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:44.370 22:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:30:44.370 22:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:44.370 22:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.370 22:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:44.370 22:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.370 22:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:44.370 22:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.370 22:28:09 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:44.370 22:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.370 22:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:44.370 22:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:44.370 22:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:30:44.370 22:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:44.370 22:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.370 22:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:44.370 22:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.370 22:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:44.370 22:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.370 22:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:44.370 22:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.370 00:30:44.370 real 0m11.377s 00:30:44.370 user 0m32.298s 00:30:44.370 sys 0m0.715s 00:30:44.370 22:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:44.370 22:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:44.370 ************************************ 00:30:44.370 END TEST fio_dif_1_multi_subsystems 00:30:44.370 ************************************ 00:30:44.370 22:28:09 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:30:44.370 22:28:09 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:30:44.370 22:28:09 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:44.370 22:28:09 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:44.370 22:28:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:44.370 ************************************ 00:30:44.370 START TEST fio_dif_rand_params 00:30:44.370 ************************************ 00:30:44.370 22:28:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:30:44.370 22:28:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:30:44.370 22:28:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:30:44.370 22:28:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:30:44.370 22:28:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:30:44.370 22:28:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:30:44.370 22:28:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:30:44.370 22:28:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:30:44.370 22:28:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:30:44.370 22:28:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:44.370 22:28:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:44.370 22:28:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 
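For completeness, the destroy_subsystems teardown traced at the end of the multi-subsystems test above is the mirror image of setup: delete each subsystem, then its null bdev. Expressed as direct rpc.py calls (again assuming rpc_cmd forwards to scripts/rpc.py):

# Teardown of the two-subsystem test; names copied from the trace above.
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
./scripts/rpc.py bdev_null_delete bdev_null0
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
./scripts/rpc.py bdev_null_delete bdev_null1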
00:30:44.370 22:28:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:44.370 22:28:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:44.370 22:28:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.370 22:28:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:44.370 bdev_null0 00:30:44.370 22:28:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.370 22:28:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:44.370 22:28:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.370 22:28:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:44.370 22:28:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.370 22:28:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:44.370 22:28:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.370 22:28:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:44.370 22:28:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.371 22:28:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:44.371 22:28:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.371 22:28:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:44.371 [2024-07-15 22:28:09.632971] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:44.371 22:28:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.371 22:28:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:30:44.371 22:28:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:30:44.371 22:28:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:44.371 22:28:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:44.371 22:28:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:44.371 22:28:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:44.371 22:28:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:44.371 22:28:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:44.371 22:28:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:44.371 { 00:30:44.371 "params": { 00:30:44.371 "name": "Nvme$subsystem", 00:30:44.371 "trtype": "$TEST_TRANSPORT", 00:30:44.371 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:44.371 "adrfam": "ipv4", 00:30:44.371 "trsvcid": "$NVMF_PORT", 00:30:44.371 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:44.371 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:44.371 "hdgst": ${hdgst:-false}, 
00:30:44.371 "ddgst": ${ddgst:-false} 00:30:44.371 }, 00:30:44.371 "method": "bdev_nvme_attach_controller" 00:30:44.371 } 00:30:44.371 EOF 00:30:44.371 )") 00:30:44.371 22:28:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:44.371 22:28:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:44.371 22:28:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:44.371 22:28:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:44.371 22:28:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:44.371 22:28:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:44.371 22:28:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:44.371 22:28:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:44.371 22:28:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:44.371 22:28:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:44.371 22:28:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:44.371 22:28:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:44.371 22:28:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:44.371 22:28:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:44.371 22:28:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:44.371 22:28:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:44.371 22:28:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:30:44.371 22:28:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:44.371 22:28:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:44.371 "params": { 00:30:44.371 "name": "Nvme0", 00:30:44.371 "trtype": "tcp", 00:30:44.371 "traddr": "10.0.0.2", 00:30:44.371 "adrfam": "ipv4", 00:30:44.371 "trsvcid": "4420", 00:30:44.371 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:44.371 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:44.371 "hdgst": false, 00:30:44.371 "ddgst": false 00:30:44.371 }, 00:30:44.371 "method": "bdev_nvme_attach_controller" 00:30:44.371 }' 00:30:44.371 22:28:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:44.371 22:28:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:44.371 22:28:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:44.371 22:28:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:44.371 22:28:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:44.371 22:28:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:44.670 22:28:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:44.670 22:28:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:44.670 22:28:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:44.670 22:28:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:44.947 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:44.947 ... 
00:30:44.947 fio-3.35 00:30:44.947 Starting 3 threads 00:30:44.947 EAL: No free 2048 kB hugepages reported on node 1 00:30:51.526 00:30:51.526 filename0: (groupid=0, jobs=1): err= 0: pid=2983408: Mon Jul 15 22:28:15 2024 00:30:51.526 read: IOPS=205, BW=25.7MiB/s (26.9MB/s)(129MiB/5010msec) 00:30:51.526 slat (nsec): min=5436, max=37411, avg=8138.13, stdev=1482.63 00:30:51.526 clat (usec): min=5314, max=55008, avg=14578.53, stdev=14385.76 00:30:51.526 lat (usec): min=5320, max=55016, avg=14586.66, stdev=14385.81 00:30:51.526 clat percentiles (usec): 00:30:51.526 | 1.00th=[ 5538], 5.00th=[ 6063], 10.00th=[ 6521], 20.00th=[ 7242], 00:30:51.526 | 30.00th=[ 7898], 40.00th=[ 8717], 50.00th=[ 9503], 60.00th=[10028], 00:30:51.526 | 70.00th=[10945], 80.00th=[12125], 90.00th=[49546], 95.00th=[51643], 00:30:51.526 | 99.00th=[53740], 99.50th=[54264], 99.90th=[54789], 99.95th=[54789], 00:30:51.526 | 99.99th=[54789] 00:30:51.526 bw ( KiB/s): min=16896, max=47872, per=39.25%, avg=26291.20, stdev=9452.41, samples=10 00:30:51.526 iops : min= 132, max= 374, avg=205.40, stdev=73.85, samples=10 00:30:51.526 lat (msec) : 10=59.13%, 20=27.77%, 50=3.50%, 100=9.61% 00:30:51.526 cpu : usr=95.17%, sys=4.45%, ctx=16, majf=0, minf=79 00:30:51.526 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:51.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.526 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.526 issued rwts: total=1030,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:51.526 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:51.526 filename0: (groupid=0, jobs=1): err= 0: pid=2983410: Mon Jul 15 22:28:15 2024 00:30:51.526 read: IOPS=200, BW=25.0MiB/s (26.3MB/s)(126MiB/5045msec) 00:30:51.526 slat (nsec): min=5446, max=33028, avg=7765.20, stdev=1703.20 00:30:51.526 clat (usec): min=5441, max=93892, avg=14917.37, stdev=15028.26 00:30:51.526 lat (usec): min=5447, max=93899, avg=14925.13, stdev=15028.35 00:30:51.526 clat percentiles (usec): 00:30:51.526 | 1.00th=[ 5866], 5.00th=[ 6456], 10.00th=[ 6783], 20.00th=[ 7701], 00:30:51.526 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[ 9503], 60.00th=[10290], 00:30:51.526 | 70.00th=[11076], 80.00th=[12387], 90.00th=[50070], 95.00th=[52167], 00:30:51.526 | 99.00th=[54789], 99.50th=[56361], 99.90th=[93848], 99.95th=[93848], 00:30:51.526 | 99.99th=[93848] 00:30:51.526 bw ( KiB/s): min=13056, max=35840, per=38.56%, avg=25830.40, stdev=6549.52, samples=10 00:30:51.526 iops : min= 102, max= 280, avg=201.80, stdev=51.17, samples=10 00:30:51.526 lat (msec) : 10=56.78%, 20=30.46%, 50=3.07%, 100=9.69% 00:30:51.526 cpu : usr=96.27%, sys=3.41%, ctx=13, majf=0, minf=73 00:30:51.526 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:51.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.526 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.526 issued rwts: total=1011,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:51.526 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:51.526 filename0: (groupid=0, jobs=1): err= 0: pid=2983411: Mon Jul 15 22:28:15 2024 00:30:51.526 read: IOPS=119, BW=14.9MiB/s (15.7MB/s)(74.9MiB/5012msec) 00:30:51.526 slat (nsec): min=5439, max=53205, avg=8110.85, stdev=2602.49 00:30:51.526 clat (usec): min=8186, max=95917, avg=25086.64, stdev=20482.22 00:30:51.526 lat (usec): min=8195, max=95923, avg=25094.75, stdev=20481.84 00:30:51.526 clat percentiles (usec): 
00:30:51.526 | 1.00th=[ 8848], 5.00th=[ 9765], 10.00th=[10290], 20.00th=[11207], 00:30:51.526 | 30.00th=[12125], 40.00th=[12911], 50.00th=[13829], 60.00th=[15270], 00:30:51.526 | 70.00th=[17171], 80.00th=[53740], 90.00th=[55313], 95.00th=[56361], 00:30:51.526 | 99.00th=[93848], 99.50th=[94897], 99.90th=[95945], 99.95th=[95945], 00:30:51.526 | 99.99th=[95945] 00:30:51.526 bw ( KiB/s): min= 6144, max=25344, per=22.78%, avg=15257.60, stdev=5130.23, samples=10 00:30:51.526 iops : min= 48, max= 198, avg=119.20, stdev=40.08, samples=10 00:30:51.526 lat (msec) : 10=7.18%, 20=64.94%, 100=27.88% 00:30:51.526 cpu : usr=97.21%, sys=2.55%, ctx=15, majf=0, minf=156 00:30:51.526 IO depths : 1=2.2%, 2=97.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:51.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.526 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.526 issued rwts: total=599,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:51.526 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:51.526 00:30:51.526 Run status group 0 (all jobs): 00:30:51.526 READ: bw=65.4MiB/s (68.6MB/s), 14.9MiB/s-25.7MiB/s (15.7MB/s-26.9MB/s), io=330MiB (346MB), run=5010-5045msec 00:30:51.526 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:30:51.526 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:51.526 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:51.526 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:51.526 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:51.526 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:51.526 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.526 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:51.526 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.526 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:51.526 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.526 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:51.526 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.526 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:30:51.526 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:30:51.526 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:30:51.526 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:30:51.526 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:30:51.526 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:30:51.526 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:51.527 22:28:15 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:51.527 bdev_null0 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:51.527 [2024-07-15 22:28:15.749379] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:51.527 bdev_null1 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
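(For reference, the rpc_cmd wrappers traced above and below correspond to plain scripts/rpc.py calls against the running nvmf target. A minimal standalone sketch of the subsystem-0 setup, assuming the SPDK checkout at the workspace path above, a running nvmf_tgt, and a TCP transport already created earlier in dif.sh; all names, flags and addresses mirror the trace:

  # 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 2
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
  # expose it over NVMe/TCP on 10.0.0.2:4420
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The same sequence repeats in the trace for cnode1/bdev_null1 and cnode2/bdev_null2.)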
00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:51.527 bdev_null2 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:51.527 { 00:30:51.527 "params": { 00:30:51.527 "name": "Nvme$subsystem", 00:30:51.527 "trtype": "$TEST_TRANSPORT", 00:30:51.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:51.527 "adrfam": "ipv4", 00:30:51.527 "trsvcid": "$NVMF_PORT", 00:30:51.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:51.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:51.527 "hdgst": ${hdgst:-false}, 00:30:51.527 "ddgst": ${ddgst:-false} 00:30:51.527 }, 00:30:51.527 "method": "bdev_nvme_attach_controller" 00:30:51.527 } 00:30:51.527 EOF 00:30:51.527 )") 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:51.527 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:51.528 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:51.528 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:51.528 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:51.528 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:51.528 22:28:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:51.528 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:51.528 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:51.528 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:51.528 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:51.528 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:51.528 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:51.528 22:28:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:51.528 22:28:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:51.528 { 00:30:51.528 "params": { 00:30:51.528 "name": "Nvme$subsystem", 00:30:51.528 "trtype": "$TEST_TRANSPORT", 00:30:51.528 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:51.528 "adrfam": "ipv4", 00:30:51.528 "trsvcid": "$NVMF_PORT", 00:30:51.528 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:51.528 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:51.528 "hdgst": ${hdgst:-false}, 00:30:51.528 "ddgst": ${ddgst:-false} 00:30:51.528 }, 00:30:51.528 "method": "bdev_nvme_attach_controller" 00:30:51.528 } 00:30:51.528 EOF 00:30:51.528 )") 00:30:51.528 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:51.528 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file <= files )) 00:30:51.528 22:28:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:51.528 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:51.528 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:51.528 22:28:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:51.528 22:28:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:51.528 22:28:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:51.528 { 00:30:51.528 "params": { 00:30:51.528 "name": "Nvme$subsystem", 00:30:51.528 "trtype": "$TEST_TRANSPORT", 00:30:51.528 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:51.528 "adrfam": "ipv4", 00:30:51.528 "trsvcid": "$NVMF_PORT", 00:30:51.528 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:51.528 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:51.528 "hdgst": ${hdgst:-false}, 00:30:51.528 "ddgst": ${ddgst:-false} 00:30:51.528 }, 00:30:51.528 "method": "bdev_nvme_attach_controller" 00:30:51.528 } 00:30:51.528 EOF 00:30:51.528 )") 00:30:51.528 22:28:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:51.528 22:28:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:30:51.528 22:28:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:51.528 22:28:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:51.528 "params": { 00:30:51.528 "name": "Nvme0", 00:30:51.528 "trtype": "tcp", 00:30:51.528 "traddr": "10.0.0.2", 00:30:51.528 "adrfam": "ipv4", 00:30:51.528 "trsvcid": "4420", 00:30:51.528 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:51.528 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:51.528 "hdgst": false, 00:30:51.528 "ddgst": false 00:30:51.528 }, 00:30:51.528 "method": "bdev_nvme_attach_controller" 00:30:51.528 },{ 00:30:51.528 "params": { 00:30:51.528 "name": "Nvme1", 00:30:51.528 "trtype": "tcp", 00:30:51.528 "traddr": "10.0.0.2", 00:30:51.528 "adrfam": "ipv4", 00:30:51.528 "trsvcid": "4420", 00:30:51.528 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:51.528 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:51.528 "hdgst": false, 00:30:51.528 "ddgst": false 00:30:51.528 }, 00:30:51.528 "method": "bdev_nvme_attach_controller" 00:30:51.528 },{ 00:30:51.528 "params": { 00:30:51.528 "name": "Nvme2", 00:30:51.528 "trtype": "tcp", 00:30:51.528 "traddr": "10.0.0.2", 00:30:51.528 "adrfam": "ipv4", 00:30:51.528 "trsvcid": "4420", 00:30:51.528 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:51.528 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:51.528 "hdgst": false, 00:30:51.528 "ddgst": false 00:30:51.528 }, 00:30:51.528 "method": "bdev_nvme_attach_controller" 00:30:51.528 }' 00:30:51.528 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:51.528 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:51.528 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:51.528 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:51.528 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:51.528 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:51.528 22:28:15 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:30:51.528 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:51.528 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:51.528 22:28:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:51.528 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:51.528 ... 00:30:51.528 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:51.528 ... 00:30:51.528 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:51.528 ... 00:30:51.528 fio-3.35 00:30:51.528 Starting 24 threads 00:30:51.528 EAL: No free 2048 kB hugepages reported on node 1 00:31:03.820 00:31:03.820 filename0: (groupid=0, jobs=1): err= 0: pid=2984870: Mon Jul 15 22:28:27 2024 00:31:03.820 read: IOPS=506, BW=2026KiB/s (2075kB/s)(19.8MiB/10022msec) 00:31:03.820 slat (nsec): min=5438, max=85307, avg=11479.95, stdev=9228.73 00:31:03.820 clat (usec): min=10458, max=57983, avg=31503.12, stdev=5047.37 00:31:03.820 lat (usec): min=10467, max=58020, avg=31514.60, stdev=5048.40 00:31:03.820 clat percentiles (usec): 00:31:03.820 | 1.00th=[16909], 5.00th=[21627], 10.00th=[26346], 20.00th=[30540], 00:31:03.820 | 30.00th=[31065], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:31:03.820 | 70.00th=[32375], 80.00th=[33162], 90.00th=[33817], 95.00th=[38536], 00:31:03.820 | 99.00th=[50070], 99.50th=[53740], 99.90th=[57934], 99.95th=[57934], 00:31:03.820 | 99.99th=[57934] 00:31:03.820 bw ( KiB/s): min= 1920, max= 2304, per=4.25%, avg=2023.75, stdev=99.10, samples=20 00:31:03.820 iops : min= 480, max= 576, avg=505.90, stdev=24.76, samples=20 00:31:03.820 lat (msec) : 20=3.15%, 50=95.72%, 100=1.12% 00:31:03.820 cpu : usr=98.65%, sys=1.00%, ctx=22, majf=0, minf=117 00:31:03.820 IO depths : 1=4.1%, 2=8.3%, 4=18.4%, 8=60.1%, 16=9.2%, 32=0.0%, >=64=0.0% 00:31:03.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.820 complete : 0=0.0%, 4=92.6%, 8=2.4%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.820 issued rwts: total=5076,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.820 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.820 filename0: (groupid=0, jobs=1): err= 0: pid=2984871: Mon Jul 15 22:28:27 2024 00:31:03.820 read: IOPS=505, BW=2021KiB/s (2069kB/s)(19.8MiB/10020msec) 00:31:03.820 slat (usec): min=5, max=110, avg=10.60, stdev= 9.25 00:31:03.820 clat (usec): min=12551, max=56126, avg=31593.85, stdev=3811.73 00:31:03.820 lat (usec): min=12559, max=56135, avg=31604.44, stdev=3811.75 00:31:03.820 clat percentiles (usec): 00:31:03.820 | 1.00th=[19006], 5.00th=[23725], 10.00th=[28967], 20.00th=[30802], 00:31:03.820 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:31:03.820 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33424], 95.00th=[36439], 00:31:03.820 | 99.00th=[45351], 99.50th=[46924], 99.90th=[55837], 99.95th=[55837], 00:31:03.820 | 99.99th=[56361] 00:31:03.820 bw ( KiB/s): min= 1848, max= 2128, per=4.24%, avg=2018.40, stdev=79.41, samples=20 00:31:03.820 iops : min= 462, max= 532, avg=504.60, stdev=19.85, samples=20 00:31:03.820 lat (msec) : 20=1.70%, 50=97.98%, 100=0.32% 
00:31:03.820 cpu : usr=98.88%, sys=0.78%, ctx=30, majf=0, minf=39 00:31:03.820 IO depths : 1=4.0%, 2=8.0%, 4=17.9%, 8=60.6%, 16=9.5%, 32=0.0%, >=64=0.0% 00:31:03.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.820 complete : 0=0.0%, 4=92.5%, 8=2.7%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.820 issued rwts: total=5062,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.820 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.820 filename0: (groupid=0, jobs=1): err= 0: pid=2984872: Mon Jul 15 22:28:27 2024 00:31:03.820 read: IOPS=472, BW=1889KiB/s (1934kB/s)(18.4MiB/10003msec) 00:31:03.820 slat (usec): min=5, max=104, avg=14.73, stdev=12.71 00:31:03.820 clat (usec): min=10329, max=62273, avg=33782.71, stdev=5871.01 00:31:03.820 lat (usec): min=10359, max=62298, avg=33797.44, stdev=5871.10 00:31:03.820 clat percentiles (usec): 00:31:03.820 | 1.00th=[19268], 5.00th=[28181], 10.00th=[30540], 20.00th=[31327], 00:31:03.820 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32637], 00:31:03.820 | 70.00th=[33162], 80.00th=[35390], 90.00th=[42206], 95.00th=[45351], 00:31:03.821 | 99.00th=[54264], 99.50th=[54789], 99.90th=[60031], 99.95th=[62129], 00:31:03.821 | 99.99th=[62129] 00:31:03.821 bw ( KiB/s): min= 1536, max= 2048, per=3.95%, avg=1881.00, stdev=126.15, samples=19 00:31:03.821 iops : min= 384, max= 512, avg=470.21, stdev=31.52, samples=19 00:31:03.821 lat (msec) : 20=1.25%, 50=96.55%, 100=2.20% 00:31:03.821 cpu : usr=98.97%, sys=0.71%, ctx=22, majf=0, minf=75 00:31:03.821 IO depths : 1=2.7%, 2=5.5%, 4=16.0%, 8=64.6%, 16=11.2%, 32=0.0%, >=64=0.0% 00:31:03.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.821 complete : 0=0.0%, 4=92.2%, 8=3.4%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.821 issued rwts: total=4723,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.821 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.821 filename0: (groupid=0, jobs=1): err= 0: pid=2984873: Mon Jul 15 22:28:27 2024 00:31:03.821 read: IOPS=491, BW=1965KiB/s (2012kB/s)(19.2MiB/10011msec) 00:31:03.821 slat (usec): min=5, max=106, avg=20.35, stdev=16.16 00:31:03.821 clat (usec): min=8554, max=54691, avg=32428.17, stdev=5194.67 00:31:03.821 lat (usec): min=8563, max=54697, avg=32448.53, stdev=5194.07 00:31:03.821 clat percentiles (usec): 00:31:03.821 | 1.00th=[17695], 5.00th=[23725], 10.00th=[28967], 20.00th=[30802], 00:31:03.821 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[32375], 00:31:03.821 | 70.00th=[32637], 80.00th=[33424], 90.00th=[38536], 95.00th=[43779], 00:31:03.821 | 99.00th=[49546], 99.50th=[51119], 99.90th=[54789], 99.95th=[54789], 00:31:03.821 | 99.99th=[54789] 00:31:03.821 bw ( KiB/s): min= 1824, max= 2048, per=4.12%, avg=1962.95, stdev=64.40, samples=19 00:31:03.821 iops : min= 456, max= 512, avg=490.74, stdev=16.10, samples=19 00:31:03.821 lat (msec) : 10=0.08%, 20=1.89%, 50=97.15%, 100=0.87% 00:31:03.821 cpu : usr=98.63%, sys=0.87%, ctx=44, majf=0, minf=37 00:31:03.821 IO depths : 1=1.4%, 2=3.9%, 4=14.7%, 8=66.8%, 16=13.1%, 32=0.0%, >=64=0.0% 00:31:03.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.821 complete : 0=0.0%, 4=92.4%, 8=3.3%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.821 issued rwts: total=4918,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.821 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.821 filename0: (groupid=0, jobs=1): err= 0: pid=2984874: Mon Jul 15 22:28:27 2024 00:31:03.821 read: IOPS=474, 
BW=1897KiB/s (1943kB/s)(18.5MiB/10003msec) 00:31:03.821 slat (usec): min=5, max=114, avg=20.19, stdev=16.06 00:31:03.821 clat (usec): min=5938, max=76466, avg=33587.36, stdev=5842.32 00:31:03.821 lat (usec): min=5943, max=76496, avg=33607.55, stdev=5841.15 00:31:03.821 clat percentiles (usec): 00:31:03.821 | 1.00th=[19268], 5.00th=[27132], 10.00th=[30278], 20.00th=[31065], 00:31:03.821 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32637], 00:31:03.821 | 70.00th=[33162], 80.00th=[34866], 90.00th=[42206], 95.00th=[44303], 00:31:03.821 | 99.00th=[52167], 99.50th=[56361], 99.90th=[76022], 99.95th=[76022], 00:31:03.821 | 99.99th=[76022] 00:31:03.821 bw ( KiB/s): min= 1664, max= 2048, per=3.97%, avg=1889.00, stdev=111.55, samples=19 00:31:03.821 iops : min= 416, max= 512, avg=472.21, stdev=27.94, samples=19 00:31:03.821 lat (msec) : 10=0.06%, 20=1.22%, 50=96.59%, 100=2.13% 00:31:03.821 cpu : usr=98.91%, sys=0.74%, ctx=15, majf=0, minf=64 00:31:03.821 IO depths : 1=1.7%, 2=3.7%, 4=13.6%, 8=68.5%, 16=12.5%, 32=0.0%, >=64=0.0% 00:31:03.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.821 complete : 0=0.0%, 4=91.8%, 8=4.1%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.821 issued rwts: total=4745,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.821 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.821 filename0: (groupid=0, jobs=1): err= 0: pid=2984875: Mon Jul 15 22:28:27 2024 00:31:03.821 read: IOPS=499, BW=1997KiB/s (2045kB/s)(19.5MiB/10017msec) 00:31:03.821 slat (usec): min=5, max=119, avg=16.58, stdev=14.40 00:31:03.821 clat (usec): min=13470, max=61438, avg=31912.34, stdev=3756.12 00:31:03.821 lat (usec): min=13530, max=61456, avg=31928.91, stdev=3755.67 00:31:03.821 clat percentiles (usec): 00:31:03.821 | 1.00th=[20579], 5.00th=[27657], 10.00th=[30278], 20.00th=[30802], 00:31:03.821 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:31:03.821 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33424], 95.00th=[35390], 00:31:03.821 | 99.00th=[49021], 99.50th=[50594], 99.90th=[56361], 99.95th=[61604], 00:31:03.821 | 99.99th=[61604] 00:31:03.821 bw ( KiB/s): min= 1920, max= 2128, per=4.20%, avg=1999.32, stdev=65.66, samples=19 00:31:03.821 iops : min= 480, max= 532, avg=499.79, stdev=16.46, samples=19 00:31:03.821 lat (msec) : 20=1.00%, 50=98.18%, 100=0.82% 00:31:03.821 cpu : usr=98.63%, sys=1.01%, ctx=16, majf=0, minf=40 00:31:03.821 IO depths : 1=4.1%, 2=9.0%, 4=21.1%, 8=56.8%, 16=9.0%, 32=0.0%, >=64=0.0% 00:31:03.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.821 complete : 0=0.0%, 4=93.4%, 8=1.2%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.821 issued rwts: total=5001,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.821 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.821 filename0: (groupid=0, jobs=1): err= 0: pid=2984876: Mon Jul 15 22:28:27 2024 00:31:03.821 read: IOPS=493, BW=1974KiB/s (2022kB/s)(19.3MiB/10016msec) 00:31:03.821 slat (usec): min=5, max=104, avg=20.17, stdev=14.36 00:31:03.821 clat (usec): min=10481, max=60397, avg=32246.76, stdev=4264.02 00:31:03.821 lat (usec): min=10490, max=60419, avg=32266.93, stdev=4264.19 00:31:03.821 clat percentiles (usec): 00:31:03.821 | 1.00th=[19006], 5.00th=[28181], 10.00th=[30278], 20.00th=[31065], 00:31:03.821 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:31:03.821 | 70.00th=[32637], 80.00th=[33162], 90.00th=[33817], 95.00th=[39060], 00:31:03.821 | 99.00th=[50594], 99.50th=[51643], 
99.90th=[58459], 99.95th=[60031], 00:31:03.821 | 99.99th=[60556] 00:31:03.821 bw ( KiB/s): min= 1792, max= 2152, per=4.14%, avg=1973.89, stdev=97.43, samples=19 00:31:03.821 iops : min= 448, max= 538, avg=493.47, stdev=24.36, samples=19 00:31:03.821 lat (msec) : 20=1.01%, 50=97.49%, 100=1.50% 00:31:03.821 cpu : usr=99.01%, sys=0.65%, ctx=22, majf=0, minf=54 00:31:03.821 IO depths : 1=4.4%, 2=8.9%, 4=20.7%, 8=57.3%, 16=8.7%, 32=0.0%, >=64=0.0% 00:31:03.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.821 complete : 0=0.0%, 4=93.2%, 8=1.5%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.821 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.821 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.821 filename0: (groupid=0, jobs=1): err= 0: pid=2984877: Mon Jul 15 22:28:27 2024 00:31:03.821 read: IOPS=493, BW=1974KiB/s (2021kB/s)(19.3MiB/10004msec) 00:31:03.821 slat (usec): min=5, max=140, avg=16.95, stdev=18.63 00:31:03.821 clat (usec): min=11066, max=62347, avg=32316.72, stdev=4415.52 00:31:03.821 lat (usec): min=11074, max=62368, avg=32333.67, stdev=4414.92 00:31:03.821 clat percentiles (usec): 00:31:03.821 | 1.00th=[20055], 5.00th=[27919], 10.00th=[30278], 20.00th=[31065], 00:31:03.821 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:31:03.821 | 70.00th=[32637], 80.00th=[33162], 90.00th=[33817], 95.00th=[39584], 00:31:03.821 | 99.00th=[49546], 99.50th=[52167], 99.90th=[62129], 99.95th=[62129], 00:31:03.821 | 99.99th=[62129] 00:31:03.821 bw ( KiB/s): min= 1768, max= 2048, per=4.14%, avg=1970.95, stdev=75.34, samples=19 00:31:03.821 iops : min= 442, max= 512, avg=492.74, stdev=18.84, samples=19 00:31:03.821 lat (msec) : 20=0.95%, 50=98.20%, 100=0.85% 00:31:03.821 cpu : usr=97.88%, sys=1.11%, ctx=43, majf=0, minf=47 00:31:03.821 IO depths : 1=2.8%, 2=6.0%, 4=15.3%, 8=64.7%, 16=11.3%, 32=0.0%, >=64=0.0% 00:31:03.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.821 complete : 0=0.0%, 4=92.1%, 8=3.4%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.821 issued rwts: total=4937,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.821 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.821 filename1: (groupid=0, jobs=1): err= 0: pid=2984878: Mon Jul 15 22:28:27 2024 00:31:03.821 read: IOPS=461, BW=1846KiB/s (1890kB/s)(18.0MiB/10002msec) 00:31:03.821 slat (nsec): min=5440, max=96322, avg=16013.68, stdev=13249.60 00:31:03.821 clat (usec): min=6637, max=71119, avg=34588.00, stdev=6765.45 00:31:03.821 lat (usec): min=6643, max=71137, avg=34604.02, stdev=6765.04 00:31:03.821 clat percentiles (usec): 00:31:03.821 | 1.00th=[18744], 5.00th=[25035], 10.00th=[29492], 20.00th=[31065], 00:31:03.821 | 30.00th=[31589], 40.00th=[32113], 50.00th=[32375], 60.00th=[33162], 00:31:03.821 | 70.00th=[34866], 80.00th=[40109], 90.00th=[44827], 95.00th=[48497], 00:31:03.821 | 99.00th=[52167], 99.50th=[52691], 99.90th=[70779], 99.95th=[70779], 00:31:03.821 | 99.99th=[70779] 00:31:03.821 bw ( KiB/s): min= 1466, max= 1968, per=3.87%, avg=1840.95, stdev=131.19, samples=19 00:31:03.821 iops : min= 366, max= 492, avg=460.21, stdev=32.88, samples=19 00:31:03.821 lat (msec) : 10=0.22%, 20=0.97%, 50=95.41%, 100=3.40% 00:31:03.821 cpu : usr=98.75%, sys=0.93%, ctx=17, majf=0, minf=45 00:31:03.821 IO depths : 1=0.2%, 2=0.4%, 4=8.6%, 8=76.2%, 16=14.6%, 32=0.0%, >=64=0.0% 00:31:03.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.821 complete : 0=0.0%, 4=90.6%, 8=6.0%, 
16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.821 issued rwts: total=4616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.821 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.821 filename1: (groupid=0, jobs=1): err= 0: pid=2984879: Mon Jul 15 22:28:27 2024 00:31:03.821 read: IOPS=487, BW=1948KiB/s (1995kB/s)(19.0MiB/10002msec) 00:31:03.821 slat (usec): min=5, max=106, avg=17.49, stdev=14.09 00:31:03.821 clat (usec): min=11568, max=55908, avg=32712.17, stdev=4884.32 00:31:03.821 lat (usec): min=11576, max=55925, avg=32729.66, stdev=4883.86 00:31:03.821 clat percentiles (usec): 00:31:03.821 | 1.00th=[18220], 5.00th=[27919], 10.00th=[30540], 20.00th=[31065], 00:31:03.821 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:31:03.821 | 70.00th=[32637], 80.00th=[33162], 90.00th=[36963], 95.00th=[42730], 00:31:03.821 | 99.00th=[50594], 99.50th=[52167], 99.90th=[55837], 99.95th=[55837], 00:31:03.821 | 99.99th=[55837] 00:31:03.821 bw ( KiB/s): min= 1720, max= 2048, per=4.08%, avg=1943.16, stdev=83.56, samples=19 00:31:03.821 iops : min= 430, max= 512, avg=485.79, stdev=20.89, samples=19 00:31:03.821 lat (msec) : 20=1.48%, 50=96.94%, 100=1.58% 00:31:03.821 cpu : usr=97.01%, sys=1.47%, ctx=79, majf=0, minf=45 00:31:03.821 IO depths : 1=3.3%, 2=6.8%, 4=17.3%, 8=62.2%, 16=10.4%, 32=0.0%, >=64=0.0% 00:31:03.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.821 complete : 0=0.0%, 4=92.4%, 8=3.0%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.821 issued rwts: total=4871,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.821 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.821 filename1: (groupid=0, jobs=1): err= 0: pid=2984880: Mon Jul 15 22:28:27 2024 00:31:03.821 read: IOPS=516, BW=2066KiB/s (2115kB/s)(20.2MiB/10007msec) 00:31:03.821 slat (nsec): min=5435, max=98491, avg=9732.34, stdev=6955.04 00:31:03.821 clat (usec): min=2762, max=40210, avg=30897.10, stdev=4367.09 00:31:03.821 lat (usec): min=2771, max=40220, avg=30906.83, stdev=4366.92 00:31:03.821 clat percentiles (usec): 00:31:03.821 | 1.00th=[ 6652], 5.00th=[21890], 10.00th=[29754], 20.00th=[30802], 00:31:03.821 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:31:03.821 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33424], 95.00th=[33817], 00:31:03.822 | 99.00th=[34866], 99.50th=[40109], 99.90th=[40109], 99.95th=[40109], 00:31:03.822 | 99.99th=[40109] 00:31:03.822 bw ( KiB/s): min= 1920, max= 2432, per=4.34%, avg=2067.95, stdev=129.63, samples=19 00:31:03.822 iops : min= 480, max= 608, avg=516.95, stdev=32.38, samples=19 00:31:03.822 lat (msec) : 4=0.62%, 10=0.62%, 20=2.17%, 50=96.59% 00:31:03.822 cpu : usr=99.11%, sys=0.58%, ctx=17, majf=0, minf=45 00:31:03.822 IO depths : 1=6.1%, 2=12.2%, 4=24.6%, 8=50.7%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:03.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.822 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.822 issued rwts: total=5168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.822 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.822 filename1: (groupid=0, jobs=1): err= 0: pid=2984881: Mon Jul 15 22:28:27 2024 00:31:03.822 read: IOPS=504, BW=2016KiB/s (2065kB/s)(19.7MiB/10018msec) 00:31:03.822 slat (usec): min=5, max=108, avg=17.08, stdev=16.54 00:31:03.822 clat (usec): min=13150, max=74135, avg=31601.21, stdev=4537.44 00:31:03.822 lat (usec): min=13186, max=74153, avg=31618.29, stdev=4538.50 00:31:03.822 clat 
percentiles (usec): 00:31:03.822 | 1.00th=[17171], 5.00th=[22152], 10.00th=[29492], 20.00th=[30802], 00:31:03.822 | 30.00th=[31065], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:31:03.822 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33817], 95.00th=[35390], 00:31:03.822 | 99.00th=[50594], 99.50th=[52167], 99.90th=[57410], 99.95th=[73925], 00:31:03.822 | 99.99th=[73925] 00:31:03.822 bw ( KiB/s): min= 1920, max= 2240, per=4.24%, avg=2018.53, stdev=96.00, samples=19 00:31:03.822 iops : min= 480, max= 560, avg=504.63, stdev=24.00, samples=19 00:31:03.822 lat (msec) : 20=2.26%, 50=96.71%, 100=1.03% 00:31:03.822 cpu : usr=98.05%, sys=1.08%, ctx=75, majf=0, minf=57 00:31:03.822 IO depths : 1=4.3%, 2=9.0%, 4=20.6%, 8=57.5%, 16=8.6%, 32=0.0%, >=64=0.0% 00:31:03.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.822 complete : 0=0.0%, 4=93.1%, 8=1.6%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.822 issued rwts: total=5050,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.822 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.822 filename1: (groupid=0, jobs=1): err= 0: pid=2984882: Mon Jul 15 22:28:27 2024 00:31:03.822 read: IOPS=495, BW=1983KiB/s (2030kB/s)(19.4MiB/10003msec) 00:31:03.822 slat (usec): min=5, max=108, avg=20.35, stdev=14.72 00:31:03.822 clat (usec): min=3207, max=59078, avg=32122.46, stdev=4754.50 00:31:03.822 lat (usec): min=3212, max=59102, avg=32142.81, stdev=4754.94 00:31:03.822 clat percentiles (usec): 00:31:03.822 | 1.00th=[13566], 5.00th=[29754], 10.00th=[30278], 20.00th=[31065], 00:31:03.822 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:31:03.822 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33817], 95.00th=[37487], 00:31:03.822 | 99.00th=[50594], 99.50th=[55313], 99.90th=[58983], 99.95th=[58983], 00:31:03.822 | 99.99th=[58983] 00:31:03.822 bw ( KiB/s): min= 1747, max= 2048, per=4.14%, avg=1973.21, stdev=82.00, samples=19 00:31:03.822 iops : min= 436, max= 512, avg=493.26, stdev=20.62, samples=19 00:31:03.822 lat (msec) : 4=0.32%, 20=1.82%, 50=96.45%, 100=1.41% 00:31:03.822 cpu : usr=98.97%, sys=0.66%, ctx=18, majf=0, minf=42 00:31:03.822 IO depths : 1=3.9%, 2=8.2%, 4=19.2%, 8=59.5%, 16=9.2%, 32=0.0%, >=64=0.0% 00:31:03.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.822 complete : 0=0.0%, 4=93.0%, 8=1.4%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.822 issued rwts: total=4958,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.822 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.822 filename1: (groupid=0, jobs=1): err= 0: pid=2984883: Mon Jul 15 22:28:27 2024 00:31:03.822 read: IOPS=500, BW=2001KiB/s (2049kB/s)(19.6MiB/10015msec) 00:31:03.822 slat (usec): min=5, max=125, avg=22.91, stdev=18.63 00:31:03.822 clat (usec): min=10230, max=61153, avg=31781.49, stdev=4602.16 00:31:03.822 lat (usec): min=10260, max=61170, avg=31804.41, stdev=4602.43 00:31:03.822 clat percentiles (usec): 00:31:03.822 | 1.00th=[19006], 5.00th=[23725], 10.00th=[28705], 20.00th=[30540], 00:31:03.822 | 30.00th=[31065], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:31:03.822 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33817], 95.00th=[39060], 00:31:03.822 | 99.00th=[50594], 99.50th=[52167], 99.90th=[54264], 99.95th=[61080], 00:31:03.822 | 99.99th=[61080] 00:31:03.822 bw ( KiB/s): min= 1864, max= 2112, per=4.21%, avg=2002.26, stdev=72.79, samples=19 00:31:03.822 iops : min= 466, max= 528, avg=500.53, stdev=18.24, samples=19 00:31:03.822 lat (msec) : 20=1.74%, 
50=97.01%, 100=1.26% 00:31:03.822 cpu : usr=97.62%, sys=1.28%, ctx=120, majf=0, minf=47 00:31:03.822 IO depths : 1=3.3%, 2=6.9%, 4=17.9%, 8=61.8%, 16=10.1%, 32=0.0%, >=64=0.0% 00:31:03.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.822 complete : 0=0.0%, 4=92.6%, 8=2.6%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.822 issued rwts: total=5011,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.822 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.822 filename1: (groupid=0, jobs=1): err= 0: pid=2984884: Mon Jul 15 22:28:27 2024 00:31:03.822 read: IOPS=496, BW=1986KiB/s (2034kB/s)(19.4MiB/10006msec) 00:31:03.822 slat (usec): min=5, max=110, avg=18.61, stdev=15.85 00:31:03.822 clat (usec): min=13692, max=54057, avg=32061.69, stdev=3981.59 00:31:03.822 lat (usec): min=13702, max=54081, avg=32080.30, stdev=3980.86 00:31:03.822 clat percentiles (usec): 00:31:03.822 | 1.00th=[20055], 5.00th=[27919], 10.00th=[30278], 20.00th=[31065], 00:31:03.822 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:31:03.822 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33424], 95.00th=[38011], 00:31:03.822 | 99.00th=[49546], 99.50th=[51643], 99.90th=[54264], 99.95th=[54264], 00:31:03.822 | 99.99th=[54264] 00:31:03.822 bw ( KiB/s): min= 1888, max= 2072, per=4.17%, avg=1984.42, stdev=68.22, samples=19 00:31:03.822 iops : min= 472, max= 518, avg=496.11, stdev=17.06, samples=19 00:31:03.822 lat (msec) : 20=0.95%, 50=98.21%, 100=0.85% 00:31:03.822 cpu : usr=98.83%, sys=0.83%, ctx=18, majf=0, minf=46 00:31:03.822 IO depths : 1=4.4%, 2=8.9%, 4=21.3%, 8=56.8%, 16=8.6%, 32=0.0%, >=64=0.0% 00:31:03.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.822 complete : 0=0.0%, 4=93.3%, 8=1.4%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.822 issued rwts: total=4969,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.822 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.822 filename1: (groupid=0, jobs=1): err= 0: pid=2984885: Mon Jul 15 22:28:27 2024 00:31:03.822 read: IOPS=494, BW=1978KiB/s (2026kB/s)(19.4MiB/10021msec) 00:31:03.822 slat (usec): min=5, max=110, avg=12.31, stdev=10.77 00:31:03.822 clat (usec): min=13557, max=56192, avg=32256.51, stdev=5425.88 00:31:03.822 lat (usec): min=13570, max=56199, avg=32268.82, stdev=5426.20 00:31:03.822 clat percentiles (usec): 00:31:03.822 | 1.00th=[17957], 5.00th=[22938], 10.00th=[25822], 20.00th=[30540], 00:31:03.822 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[32375], 00:31:03.822 | 70.00th=[32900], 80.00th=[33424], 90.00th=[39060], 95.00th=[42730], 00:31:03.822 | 99.00th=[49021], 99.50th=[50594], 99.90th=[55313], 99.95th=[56361], 00:31:03.822 | 99.99th=[56361] 00:31:03.822 bw ( KiB/s): min= 1896, max= 2096, per=4.16%, avg=1978.40, stdev=57.40, samples=20 00:31:03.822 iops : min= 474, max= 524, avg=494.60, stdev=14.35, samples=20 00:31:03.822 lat (msec) : 20=2.20%, 50=97.01%, 100=0.79% 00:31:03.822 cpu : usr=98.70%, sys=0.96%, ctx=16, majf=0, minf=41 00:31:03.822 IO depths : 1=1.8%, 2=3.7%, 4=11.9%, 8=70.7%, 16=11.9%, 32=0.0%, >=64=0.0% 00:31:03.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.822 complete : 0=0.0%, 4=90.9%, 8=4.6%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.822 issued rwts: total=4956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.822 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.822 filename2: (groupid=0, jobs=1): err= 0: pid=2984886: Mon Jul 15 22:28:27 2024 00:31:03.822 read: 
IOPS=599, BW=2398KiB/s (2456kB/s)(23.4MiB/10010msec) 00:31:03.822 slat (nsec): min=5438, max=94630, avg=9071.44, stdev=6880.27 00:31:03.822 clat (usec): min=2252, max=48876, avg=26608.86, stdev=6072.68 00:31:03.822 lat (usec): min=2262, max=48899, avg=26617.94, stdev=6074.14 00:31:03.822 clat percentiles (usec): 00:31:03.822 | 1.00th=[ 4817], 5.00th=[18482], 10.00th=[19530], 20.00th=[21103], 00:31:03.822 | 30.00th=[21890], 40.00th=[23725], 50.00th=[29492], 60.00th=[31065], 00:31:03.822 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32375], 95.00th=[32900], 00:31:03.822 | 99.00th=[36963], 99.50th=[37487], 99.90th=[47973], 99.95th=[49021], 00:31:03.822 | 99.99th=[49021] 00:31:03.822 bw ( KiB/s): min= 1920, max= 2816, per=4.98%, avg=2372.21, stdev=267.40, samples=19 00:31:03.822 iops : min= 480, max= 704, avg=593.05, stdev=66.85, samples=19 00:31:03.822 lat (msec) : 4=0.97%, 10=0.37%, 20=10.93%, 50=87.74% 00:31:03.822 cpu : usr=99.02%, sys=0.65%, ctx=17, majf=0, minf=56 00:31:03.822 IO depths : 1=5.6%, 2=11.2%, 4=23.0%, 8=53.1%, 16=7.0%, 32=0.0%, >=64=0.0% 00:31:03.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.822 complete : 0=0.0%, 4=93.6%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.822 issued rwts: total=6002,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.822 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.822 filename2: (groupid=0, jobs=1): err= 0: pid=2984887: Mon Jul 15 22:28:27 2024 00:31:03.822 read: IOPS=475, BW=1903KiB/s (1949kB/s)(18.6MiB/10006msec) 00:31:03.822 slat (usec): min=5, max=216, avg=21.42, stdev=17.97 00:31:03.822 clat (usec): min=9436, max=55651, avg=33489.75, stdev=6508.14 00:31:03.822 lat (usec): min=9446, max=55685, avg=33511.17, stdev=6507.08 00:31:03.822 clat percentiles (usec): 00:31:03.822 | 1.00th=[16712], 5.00th=[22676], 10.00th=[27395], 20.00th=[30540], 00:31:03.822 | 30.00th=[31327], 40.00th=[31851], 50.00th=[32113], 60.00th=[32637], 00:31:03.822 | 70.00th=[33817], 80.00th=[38536], 90.00th=[42730], 95.00th=[45876], 00:31:03.822 | 99.00th=[51119], 99.50th=[52167], 99.90th=[53740], 99.95th=[55837], 00:31:03.822 | 99.99th=[55837] 00:31:03.822 bw ( KiB/s): min= 1664, max= 2160, per=3.99%, avg=1899.79, stdev=119.12, samples=19 00:31:03.822 iops : min= 416, max= 540, avg=474.95, stdev=29.78, samples=19 00:31:03.822 lat (msec) : 10=0.02%, 20=2.25%, 50=95.97%, 100=1.76% 00:31:03.822 cpu : usr=97.21%, sys=1.57%, ctx=75, majf=0, minf=54 00:31:03.822 IO depths : 1=1.2%, 2=3.4%, 4=13.2%, 8=68.8%, 16=13.4%, 32=0.0%, >=64=0.0% 00:31:03.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.822 complete : 0=0.0%, 4=92.0%, 8=3.8%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.822 issued rwts: total=4760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.822 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.822 filename2: (groupid=0, jobs=1): err= 0: pid=2984888: Mon Jul 15 22:28:27 2024 00:31:03.822 read: IOPS=501, BW=2004KiB/s (2052kB/s)(19.6MiB/10023msec) 00:31:03.822 slat (usec): min=5, max=126, avg=21.01, stdev=17.23 00:31:03.822 clat (usec): min=13767, max=60874, avg=31736.23, stdev=5087.75 00:31:03.822 lat (usec): min=13777, max=60891, avg=31757.25, stdev=5088.37 00:31:03.822 clat percentiles (usec): 00:31:03.822 | 1.00th=[19006], 5.00th=[21627], 10.00th=[25560], 20.00th=[30278], 00:31:03.822 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31851], 60.00th=[32113], 00:31:03.822 | 70.00th=[32637], 80.00th=[33162], 90.00th=[34866], 95.00th=[41681], 00:31:03.822 | 
99.00th=[49021], 99.50th=[50594], 99.90th=[53740], 99.95th=[60556], 00:31:03.823 | 99.99th=[61080] 00:31:03.823 bw ( KiB/s): min= 1872, max= 2192, per=4.21%, avg=2006.89, stdev=92.58, samples=19 00:31:03.823 iops : min= 468, max= 548, avg=501.68, stdev=23.18, samples=19 00:31:03.823 lat (msec) : 20=2.47%, 50=96.73%, 100=0.80% 00:31:03.823 cpu : usr=98.69%, sys=0.91%, ctx=93, majf=0, minf=41 00:31:03.823 IO depths : 1=2.3%, 2=6.0%, 4=17.0%, 8=63.4%, 16=11.4%, 32=0.0%, >=64=0.0% 00:31:03.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.823 complete : 0=0.0%, 4=92.4%, 8=2.8%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.823 issued rwts: total=5022,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.823 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.823 filename2: (groupid=0, jobs=1): err= 0: pid=2984889: Mon Jul 15 22:28:27 2024 00:31:03.823 read: IOPS=482, BW=1930KiB/s (1977kB/s)(18.9MiB/10002msec) 00:31:03.823 slat (nsec): min=5444, max=98119, avg=19309.67, stdev=15346.14 00:31:03.823 clat (usec): min=7346, max=54945, avg=33014.64, stdev=5378.64 00:31:03.823 lat (usec): min=7352, max=54976, avg=33033.95, stdev=5377.87 00:31:03.823 clat percentiles (usec): 00:31:03.823 | 1.00th=[20579], 5.00th=[25822], 10.00th=[30016], 20.00th=[30802], 00:31:03.823 | 30.00th=[31327], 40.00th=[31589], 50.00th=[32113], 60.00th=[32375], 00:31:03.823 | 70.00th=[32900], 80.00th=[33424], 90.00th=[40633], 95.00th=[43779], 00:31:03.823 | 99.00th=[51119], 99.50th=[51643], 99.90th=[54789], 99.95th=[54789], 00:31:03.823 | 99.99th=[54789] 00:31:03.823 bw ( KiB/s): min= 1824, max= 2048, per=4.05%, avg=1926.05, stdev=57.85, samples=19 00:31:03.823 iops : min= 456, max= 512, avg=481.47, stdev=14.47, samples=19 00:31:03.823 lat (msec) : 10=0.21%, 20=0.58%, 50=97.41%, 100=1.80% 00:31:03.823 cpu : usr=98.88%, sys=0.79%, ctx=25, majf=0, minf=40 00:31:03.823 IO depths : 1=1.8%, 2=4.1%, 4=14.0%, 8=67.4%, 16=12.7%, 32=0.0%, >=64=0.0% 00:31:03.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.823 complete : 0=0.0%, 4=92.1%, 8=3.7%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.823 issued rwts: total=4827,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.823 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.823 filename2: (groupid=0, jobs=1): err= 0: pid=2984890: Mon Jul 15 22:28:27 2024 00:31:03.823 read: IOPS=495, BW=1982KiB/s (2030kB/s)(19.4MiB/10001msec) 00:31:03.823 slat (usec): min=5, max=128, avg=13.30, stdev=12.06 00:31:03.823 clat (usec): min=10455, max=72884, avg=32180.59, stdev=4172.49 00:31:03.823 lat (usec): min=10463, max=72916, avg=32193.89, stdev=4172.87 00:31:03.823 clat percentiles (usec): 00:31:03.823 | 1.00th=[17957], 5.00th=[29492], 10.00th=[30540], 20.00th=[31065], 00:31:03.823 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:31:03.823 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33817], 95.00th=[34866], 00:31:03.823 | 99.00th=[50594], 99.50th=[55837], 99.90th=[58459], 99.95th=[72877], 00:31:03.823 | 99.99th=[72877] 00:31:03.823 bw ( KiB/s): min= 1840, max= 2048, per=4.16%, avg=1978.95, stdev=66.41, samples=19 00:31:03.823 iops : min= 460, max= 512, avg=494.74, stdev=16.60, samples=19 00:31:03.823 lat (msec) : 20=1.29%, 50=97.20%, 100=1.51% 00:31:03.823 cpu : usr=98.61%, sys=0.76%, ctx=30, majf=0, minf=47 00:31:03.823 IO depths : 1=4.3%, 2=9.2%, 4=22.3%, 8=55.8%, 16=8.4%, 32=0.0%, >=64=0.0% 00:31:03.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.823 
complete : 0=0.0%, 4=93.7%, 8=0.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.823 issued rwts: total=4956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.823 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.823 filename2: (groupid=0, jobs=1): err= 0: pid=2984891: Mon Jul 15 22:28:27 2024 00:31:03.823 read: IOPS=503, BW=2013KiB/s (2061kB/s)(19.7MiB/10021msec) 00:31:03.823 slat (usec): min=5, max=113, avg=16.28, stdev=15.29 00:31:03.823 clat (usec): min=14530, max=51414, avg=31662.28, stdev=2511.30 00:31:03.823 lat (usec): min=14536, max=51420, avg=31678.56, stdev=2511.30 00:31:03.823 clat percentiles (usec): 00:31:03.823 | 1.00th=[19792], 5.00th=[29230], 10.00th=[30278], 20.00th=[31065], 00:31:03.823 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:31:03.823 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:31:03.823 | 99.00th=[38536], 99.50th=[39584], 99.90th=[44303], 99.95th=[44827], 00:31:03.823 | 99.99th=[51643] 00:31:03.823 bw ( KiB/s): min= 1920, max= 2232, per=4.22%, avg=2010.80, stdev=87.80, samples=20 00:31:03.823 iops : min= 480, max= 558, avg=502.70, stdev=21.95, samples=20 00:31:03.823 lat (msec) : 20=1.13%, 50=98.83%, 100=0.04% 00:31:03.823 cpu : usr=98.85%, sys=0.78%, ctx=41, majf=0, minf=37 00:31:03.823 IO depths : 1=5.4%, 2=10.9%, 4=23.1%, 8=53.3%, 16=7.3%, 32=0.0%, >=64=0.0% 00:31:03.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.823 complete : 0=0.0%, 4=93.6%, 8=0.8%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.823 issued rwts: total=5043,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.823 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.823 filename2: (groupid=0, jobs=1): err= 0: pid=2984892: Mon Jul 15 22:28:27 2024 00:31:03.823 read: IOPS=493, BW=1974KiB/s (2022kB/s)(19.3MiB/10027msec) 00:31:03.823 slat (usec): min=5, max=129, avg=17.98, stdev=17.38 00:31:03.823 clat (usec): min=16272, max=58141, avg=32266.56, stdev=5077.27 00:31:03.823 lat (usec): min=16278, max=58154, avg=32284.54, stdev=5076.80 00:31:03.823 clat percentiles (usec): 00:31:03.823 | 1.00th=[19006], 5.00th=[22938], 10.00th=[28967], 20.00th=[30802], 00:31:03.823 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:31:03.823 | 70.00th=[32637], 80.00th=[33162], 90.00th=[37487], 95.00th=[43254], 00:31:03.823 | 99.00th=[49021], 99.50th=[50594], 99.90th=[54264], 99.95th=[57934], 00:31:03.823 | 99.99th=[57934] 00:31:03.823 bw ( KiB/s): min= 1888, max= 2072, per=4.15%, avg=1975.60, stdev=57.77, samples=20 00:31:03.823 iops : min= 472, max= 518, avg=493.90, stdev=14.44, samples=20 00:31:03.823 lat (msec) : 20=2.10%, 50=97.03%, 100=0.87% 00:31:03.823 cpu : usr=98.78%, sys=0.84%, ctx=91, majf=0, minf=43 00:31:03.823 IO depths : 1=2.5%, 2=5.7%, 4=15.7%, 8=64.6%, 16=11.4%, 32=0.0%, >=64=0.0% 00:31:03.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.823 complete : 0=0.0%, 4=92.4%, 8=2.8%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.823 issued rwts: total=4949,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.823 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.823 filename2: (groupid=0, jobs=1): err= 0: pid=2984893: Mon Jul 15 22:28:27 2024 00:31:03.823 read: IOPS=476, BW=1907KiB/s (1953kB/s)(18.6MiB/10004msec) 00:31:03.823 slat (nsec): min=5430, max=96705, avg=18612.53, stdev=14292.23 00:31:03.823 clat (usec): min=5665, max=54474, avg=33439.67, stdev=5675.31 00:31:03.823 lat (usec): min=5671, max=54514, avg=33458.28, 
stdev=5674.19 00:31:03.823 clat percentiles (usec): 00:31:03.823 | 1.00th=[18744], 5.00th=[26346], 10.00th=[30016], 20.00th=[30802], 00:31:03.823 | 30.00th=[31327], 40.00th=[31589], 50.00th=[32113], 60.00th=[32375], 00:31:03.823 | 70.00th=[32900], 80.00th=[34866], 90.00th=[42206], 95.00th=[45876], 00:31:03.823 | 99.00th=[51119], 99.50th=[52167], 99.90th=[53216], 99.95th=[54264], 00:31:03.823 | 99.99th=[54264] 00:31:03.823 bw ( KiB/s): min= 1816, max= 2048, per=4.00%, avg=1904.42, stdev=60.78, samples=19 00:31:03.823 iops : min= 454, max= 512, avg=476.11, stdev=15.19, samples=19 00:31:03.823 lat (msec) : 10=0.13%, 20=1.22%, 50=97.15%, 100=1.51% 00:31:03.823 cpu : usr=99.00%, sys=0.65%, ctx=15, majf=0, minf=46 00:31:03.823 IO depths : 1=1.3%, 2=2.7%, 4=10.9%, 8=71.5%, 16=13.5%, 32=0.0%, >=64=0.0% 00:31:03.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.823 complete : 0=0.0%, 4=91.0%, 8=5.6%, 16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.823 issued rwts: total=4769,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.823 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.823 00:31:03.823 Run status group 0 (all jobs): 00:31:03.823 READ: bw=46.5MiB/s (48.7MB/s), 1846KiB/s-2398KiB/s (1890kB/s-2456kB/s), io=466MiB (489MB), run=10001-10027msec 00:31:03.823 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:03.823 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:03.823 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:03.823 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:03.823 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:03.823 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:03.823 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.823 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.823 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.823 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:03.823 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.823 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.823 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.823 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:03.823 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:03.823 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:03.823 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:03.823 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.823 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.823 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.823 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:03.823 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 
00:31:03.823 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.823 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.823 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:03.823 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:03.823 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.824 bdev_null0 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.824 22:28:27 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.824 [2024-07-15 22:28:27.702791] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.824 bdev_null1 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 
00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:03.824 { 00:31:03.824 "params": { 00:31:03.824 "name": "Nvme$subsystem", 00:31:03.824 "trtype": "$TEST_TRANSPORT", 00:31:03.824 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:03.824 "adrfam": "ipv4", 00:31:03.824 "trsvcid": "$NVMF_PORT", 00:31:03.824 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:03.824 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:03.824 "hdgst": ${hdgst:-false}, 00:31:03.824 "ddgst": ${ddgst:-false} 00:31:03.824 }, 00:31:03.824 "method": "bdev_nvme_attach_controller" 00:31:03.824 } 00:31:03.824 EOF 00:31:03.824 )") 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:03.824 { 00:31:03.824 "params": { 00:31:03.824 "name": "Nvme$subsystem", 00:31:03.824 "trtype": "$TEST_TRANSPORT", 00:31:03.824 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:03.824 "adrfam": "ipv4", 00:31:03.824 "trsvcid": "$NVMF_PORT", 00:31:03.824 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:03.824 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:03.824 "hdgst": ${hdgst:-false}, 00:31:03.824 "ddgst": ${ddgst:-false} 00:31:03.824 }, 00:31:03.824 "method": "bdev_nvme_attach_controller" 
00:31:03.824 } 00:31:03.824 EOF 00:31:03.824 )") 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:03.824 "params": { 00:31:03.824 "name": "Nvme0", 00:31:03.824 "trtype": "tcp", 00:31:03.824 "traddr": "10.0.0.2", 00:31:03.824 "adrfam": "ipv4", 00:31:03.824 "trsvcid": "4420", 00:31:03.824 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:03.824 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:03.824 "hdgst": false, 00:31:03.824 "ddgst": false 00:31:03.824 }, 00:31:03.824 "method": "bdev_nvme_attach_controller" 00:31:03.824 },{ 00:31:03.824 "params": { 00:31:03.824 "name": "Nvme1", 00:31:03.824 "trtype": "tcp", 00:31:03.824 "traddr": "10.0.0.2", 00:31:03.824 "adrfam": "ipv4", 00:31:03.824 "trsvcid": "4420", 00:31:03.824 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:03.824 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:03.824 "hdgst": false, 00:31:03.824 "ddgst": false 00:31:03.824 }, 00:31:03.824 "method": "bdev_nvme_attach_controller" 00:31:03.824 }' 00:31:03.824 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:03.825 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:03.825 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:03.825 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:03.825 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:03.825 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:03.825 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:03.825 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:03.825 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:03.825 22:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:03.825 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:03.825 ... 00:31:03.825 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:03.825 ... 
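The fio invocation traced above is the SPDK fio bdev plugin path: the spdk_bdev engine is LD_PRELOADed into a stock fio binary and fed the JSON config just printed, which tells it to attach Nvme0 and Nvme1 over NVMe/TCP before running the generated job file. The test passes both inputs over /dev/fd descriptors; a stand-alone equivalent looks roughly like this (bdev.json and dif.fio are illustrative file names, assumed to hold the printed JSON config and the job file built by gen_fio_conf):

    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio

Each filename in the job file then names an SPDK bdev (here the attached controllers surface as Nvme0n1 and Nvme1n1) rather than a kernel block device node.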
00:31:03.825 fio-3.35 00:31:03.825 Starting 4 threads 00:31:03.825 EAL: No free 2048 kB hugepages reported on node 1 00:31:09.111 00:31:09.111 filename0: (groupid=0, jobs=1): err= 0: pid=2987390: Mon Jul 15 22:28:33 2024 00:31:09.111 read: IOPS=2077, BW=16.2MiB/s (17.0MB/s)(81.2MiB/5003msec) 00:31:09.111 slat (nsec): min=5399, max=52859, avg=7455.49, stdev=2831.41 00:31:09.111 clat (usec): min=1758, max=7346, avg=3831.12, stdev=634.00 00:31:09.111 lat (usec): min=1766, max=7354, avg=3838.58, stdev=633.84 00:31:09.111 clat percentiles (usec): 00:31:09.111 | 1.00th=[ 2376], 5.00th=[ 2835], 10.00th=[ 3064], 20.00th=[ 3294], 00:31:09.111 | 30.00th=[ 3490], 40.00th=[ 3654], 50.00th=[ 3785], 60.00th=[ 3916], 00:31:09.111 | 70.00th=[ 4146], 80.00th=[ 4359], 90.00th=[ 4686], 95.00th=[ 4948], 00:31:09.111 | 99.00th=[ 5473], 99.50th=[ 5735], 99.90th=[ 6063], 99.95th=[ 6194], 00:31:09.111 | 99.99th=[ 7308] 00:31:09.111 bw ( KiB/s): min=16208, max=17456, per=25.35%, avg=16604.44, stdev=352.89, samples=9 00:31:09.111 iops : min= 2026, max= 2182, avg=2075.56, stdev=44.11, samples=9 00:31:09.111 lat (msec) : 2=0.03%, 4=62.49%, 10=37.48% 00:31:09.111 cpu : usr=96.34%, sys=3.40%, ctx=11, majf=0, minf=47 00:31:09.111 IO depths : 1=0.3%, 2=1.6%, 4=69.1%, 8=29.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:09.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:09.111 complete : 0=0.0%, 4=93.5%, 8=6.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:09.111 issued rwts: total=10392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:09.111 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:09.111 filename0: (groupid=0, jobs=1): err= 0: pid=2987391: Mon Jul 15 22:28:33 2024 00:31:09.111 read: IOPS=2049, BW=16.0MiB/s (16.8MB/s)(80.1MiB/5002msec) 00:31:09.111 slat (nsec): min=5404, max=43928, avg=7559.42, stdev=2796.16 00:31:09.111 clat (usec): min=1665, max=6828, avg=3882.87, stdev=681.50 00:31:09.111 lat (usec): min=1671, max=6837, avg=3890.43, stdev=681.37 00:31:09.111 clat percentiles (usec): 00:31:09.111 | 1.00th=[ 2376], 5.00th=[ 2900], 10.00th=[ 3097], 20.00th=[ 3326], 00:31:09.111 | 30.00th=[ 3490], 40.00th=[ 3687], 50.00th=[ 3818], 60.00th=[ 3949], 00:31:09.111 | 70.00th=[ 4178], 80.00th=[ 4424], 90.00th=[ 4752], 95.00th=[ 5145], 00:31:09.111 | 99.00th=[ 5735], 99.50th=[ 5932], 99.90th=[ 6456], 99.95th=[ 6718], 00:31:09.111 | 99.99th=[ 6849] 00:31:09.111 bw ( KiB/s): min=16048, max=16960, per=25.05%, avg=16408.89, stdev=264.26, samples=9 00:31:09.111 iops : min= 2006, max= 2120, avg=2051.11, stdev=33.03, samples=9 00:31:09.111 lat (msec) : 2=0.20%, 4=60.94%, 10=38.86% 00:31:09.111 cpu : usr=96.74%, sys=2.98%, ctx=13, majf=0, minf=39 00:31:09.111 IO depths : 1=0.4%, 2=2.1%, 4=68.7%, 8=28.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:09.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:09.111 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:09.111 issued rwts: total=10251,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:09.111 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:09.111 filename1: (groupid=0, jobs=1): err= 0: pid=2987392: Mon Jul 15 22:28:33 2024 00:31:09.111 read: IOPS=2070, BW=16.2MiB/s (17.0MB/s)(81.6MiB/5043msec) 00:31:09.111 slat (nsec): min=5407, max=46102, avg=7369.32, stdev=2716.03 00:31:09.111 clat (usec): min=1731, max=45611, avg=3831.65, stdev=1582.27 00:31:09.111 lat (usec): min=1740, max=45617, avg=3839.02, stdev=1582.46 00:31:09.111 clat percentiles (usec): 00:31:09.111 | 1.00th=[ 2507], 5.00th=[ 
2868], 10.00th=[ 3032], 20.00th=[ 3261], 00:31:09.111 | 30.00th=[ 3425], 40.00th=[ 3556], 50.00th=[ 3752], 60.00th=[ 3851], 00:31:09.111 | 70.00th=[ 4080], 80.00th=[ 4293], 90.00th=[ 4621], 95.00th=[ 5014], 00:31:09.111 | 99.00th=[ 5473], 99.50th=[ 5800], 99.90th=[42730], 99.95th=[45351], 00:31:09.111 | 99.99th=[45351] 00:31:09.111 bw ( KiB/s): min=15264, max=17488, per=25.50%, avg=16702.40, stdev=619.15, samples=10 00:31:09.111 iops : min= 1908, max= 2186, avg=2087.80, stdev=77.39, samples=10 00:31:09.111 lat (msec) : 2=0.03%, 4=67.51%, 10=32.33%, 50=0.12% 00:31:09.111 cpu : usr=96.69%, sys=3.03%, ctx=11, majf=0, minf=51 00:31:09.111 IO depths : 1=0.4%, 2=2.0%, 4=68.1%, 8=29.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:09.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:09.111 complete : 0=0.0%, 4=94.0%, 8=6.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:09.111 issued rwts: total=10444,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:09.111 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:09.111 filename1: (groupid=0, jobs=1): err= 0: pid=2987393: Mon Jul 15 22:28:33 2024 00:31:09.111 read: IOPS=2040, BW=15.9MiB/s (16.7MB/s)(79.7MiB/5002msec) 00:31:09.111 slat (nsec): min=5406, max=40475, avg=6074.94, stdev=1795.15 00:31:09.111 clat (usec): min=1623, max=46022, avg=3904.16, stdev=1347.38 00:31:09.111 lat (usec): min=1629, max=46062, avg=3910.24, stdev=1347.58 00:31:09.111 clat percentiles (usec): 00:31:09.111 | 1.00th=[ 2573], 5.00th=[ 2900], 10.00th=[ 3097], 20.00th=[ 3326], 00:31:09.111 | 30.00th=[ 3490], 40.00th=[ 3654], 50.00th=[ 3818], 60.00th=[ 3982], 00:31:09.111 | 70.00th=[ 4178], 80.00th=[ 4424], 90.00th=[ 4752], 95.00th=[ 5080], 00:31:09.111 | 99.00th=[ 5604], 99.50th=[ 5866], 99.90th=[ 7046], 99.95th=[45876], 00:31:09.111 | 99.99th=[45876] 00:31:09.111 bw ( KiB/s): min=14784, max=16720, per=24.92%, avg=16321.78, stdev=588.39, samples=9 00:31:09.111 iops : min= 1848, max= 2090, avg=2040.22, stdev=73.55, samples=9 00:31:09.111 lat (msec) : 2=0.07%, 4=61.02%, 10=38.83%, 50=0.08% 00:31:09.111 cpu : usr=96.94%, sys=2.80%, ctx=9, majf=0, minf=39 00:31:09.111 IO depths : 1=0.3%, 2=2.0%, 4=68.6%, 8=29.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:09.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:09.111 complete : 0=0.0%, 4=93.6%, 8=6.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:09.111 issued rwts: total=10205,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:09.111 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:09.111 00:31:09.111 Run status group 0 (all jobs): 00:31:09.111 READ: bw=64.0MiB/s (67.1MB/s), 15.9MiB/s-16.2MiB/s (16.7MB/s-17.0MB/s), io=323MiB (338MB), run=5002-5043msec 00:31:09.111 22:28:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:09.111 22:28:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:09.111 22:28:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:09.111 22:28:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:09.111 22:28:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:09.111 22:28:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:09.111 22:28:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.111 22:28:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:09.111 22:28:34 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.111 22:28:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:09.111 22:28:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.111 22:28:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:09.111 22:28:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.111 22:28:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:09.111 22:28:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:09.111 22:28:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:09.111 22:28:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:09.111 22:28:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.111 22:28:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:09.111 22:28:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.111 22:28:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:09.111 22:28:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.111 22:28:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:09.111 22:28:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.111 00:31:09.111 real 0m24.575s 00:31:09.111 user 5m13.458s 00:31:09.111 sys 0m4.477s 00:31:09.111 22:28:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:09.112 22:28:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:09.112 ************************************ 00:31:09.112 END TEST fio_dif_rand_params 00:31:09.112 ************************************ 00:31:09.112 22:28:34 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:09.112 22:28:34 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:09.112 22:28:34 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:09.112 22:28:34 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:09.112 22:28:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:09.112 ************************************ 00:31:09.112 START TEST fio_dif_digest 00:31:09.112 ************************************ 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # 
ddgst=true 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:09.112 bdev_null0 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:09.112 [2024-07-15 22:28:34.290233] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:09.112 { 00:31:09.112 "params": { 00:31:09.112 "name": "Nvme$subsystem", 00:31:09.112 "trtype": "$TEST_TRANSPORT", 00:31:09.112 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:31:09.112 "adrfam": "ipv4", 00:31:09.112 "trsvcid": "$NVMF_PORT", 00:31:09.112 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:09.112 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:09.112 "hdgst": ${hdgst:-false}, 00:31:09.112 "ddgst": ${ddgst:-false} 00:31:09.112 }, 00:31:09.112 "method": "bdev_nvme_attach_controller" 00:31:09.112 } 00:31:09.112 EOF 00:31:09.112 )") 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:09.112 "params": { 00:31:09.112 "name": "Nvme0", 00:31:09.112 "trtype": "tcp", 00:31:09.112 "traddr": "10.0.0.2", 00:31:09.112 "adrfam": "ipv4", 00:31:09.112 "trsvcid": "4420", 00:31:09.112 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:09.112 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:09.112 "hdgst": true, 00:31:09.112 "ddgst": true 00:31:09.112 }, 00:31:09.112 "method": "bdev_nvme_attach_controller" 00:31:09.112 }' 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:09.112 22:28:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:09.680 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:09.680 ... 
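Functionally, the only change from the earlier random-parameters run is visible in the config printed above: this pass sets the hdgst and ddgst parameters of bdev_nvme_attach_controller to true, so NVMe/TCP header and data digests are negotiated with the target for the duration of the fio workload. Reduced to the relevant fragment (a restatement of the printed parameters, not a complete SPDK JSON file):

    {
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": true,
        "ddgst": true
      }
    }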
00:31:09.680 fio-3.35 00:31:09.680 Starting 3 threads 00:31:09.681 EAL: No free 2048 kB hugepages reported on node 1 00:31:21.914 00:31:21.914 filename0: (groupid=0, jobs=1): err= 0: pid=2988595: Mon Jul 15 22:28:45 2024 00:31:21.914 read: IOPS=174, BW=21.8MiB/s (22.8MB/s)(219MiB/10047msec) 00:31:21.914 slat (nsec): min=5709, max=37307, avg=7037.20, stdev=1556.54 00:31:21.914 clat (usec): min=7217, max=96595, avg=17221.22, stdev=13036.14 00:31:21.914 lat (usec): min=7223, max=96604, avg=17228.25, stdev=13036.09 00:31:21.914 clat percentiles (usec): 00:31:21.914 | 1.00th=[ 7963], 5.00th=[ 9241], 10.00th=[10028], 20.00th=[11207], 00:31:21.914 | 30.00th=[11994], 40.00th=[12911], 50.00th=[13566], 60.00th=[14222], 00:31:21.914 | 70.00th=[14746], 80.00th=[15401], 90.00th=[49021], 95.00th=[54264], 00:31:21.914 | 99.00th=[56361], 99.50th=[57410], 99.90th=[94897], 99.95th=[96994], 00:31:21.914 | 99.99th=[96994] 00:31:21.914 bw ( KiB/s): min=11520, max=31488, per=33.60%, avg=22348.80, stdev=4798.75, samples=20 00:31:21.914 iops : min= 90, max= 246, avg=174.60, stdev=37.49, samples=20 00:31:21.914 lat (msec) : 10=9.83%, 20=80.16%, 50=0.17%, 100=9.83% 00:31:21.914 cpu : usr=95.81%, sys=3.91%, ctx=21, majf=0, minf=49 00:31:21.914 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:21.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.914 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.914 issued rwts: total=1749,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:21.914 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:21.914 filename0: (groupid=0, jobs=1): err= 0: pid=2988596: Mon Jul 15 22:28:45 2024 00:31:21.914 read: IOPS=159, BW=19.9MiB/s (20.9MB/s)(200MiB/10044msec) 00:31:21.914 slat (nsec): min=5731, max=53341, avg=7206.23, stdev=2041.77 00:31:21.914 clat (msec): min=5, max=135, avg=18.77, stdev=16.24 00:31:21.914 lat (msec): min=5, max=135, avg=18.78, stdev=16.24 00:31:21.914 clat percentiles (msec): 00:31:21.914 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 11], 00:31:21.914 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 15], 00:31:21.914 | 70.00th=[ 16], 80.00th=[ 17], 90.00th=[ 52], 95.00th=[ 55], 00:31:21.914 | 99.00th=[ 92], 99.50th=[ 95], 99.90th=[ 136], 99.95th=[ 136], 00:31:21.914 | 99.99th=[ 136] 00:31:21.914 bw ( KiB/s): min=15616, max=27136, per=30.79%, avg=20480.00, stdev=2947.07, samples=20 00:31:21.915 iops : min= 122, max= 212, avg=160.00, stdev=23.02, samples=20 00:31:21.915 lat (msec) : 10=19.16%, 20=67.42%, 50=2.25%, 100=11.05%, 250=0.12% 00:31:21.915 cpu : usr=96.61%, sys=3.12%, ctx=19, majf=0, minf=179 00:31:21.915 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:21.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.915 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.915 issued rwts: total=1602,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:21.915 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:21.915 filename0: (groupid=0, jobs=1): err= 0: pid=2988597: Mon Jul 15 22:28:45 2024 00:31:21.915 read: IOPS=186, BW=23.3MiB/s (24.4MB/s)(234MiB/10050msec) 00:31:21.915 slat (nsec): min=5786, max=37578, avg=6657.99, stdev=1158.44 00:31:21.915 clat (msec): min=5, max=134, avg=16.08, stdev=12.44 00:31:21.915 lat (msec): min=5, max=134, avg=16.09, stdev=12.44 00:31:21.915 clat percentiles (msec): 00:31:21.915 | 1.00th=[ 7], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 11], 
00:31:21.915 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 14], 00:31:21.915 | 70.00th=[ 15], 80.00th=[ 16], 90.00th=[ 17], 95.00th=[ 54], 00:31:21.915 | 99.00th=[ 57], 99.50th=[ 58], 99.90th=[ 95], 99.95th=[ 136], 00:31:21.915 | 99.99th=[ 136] 00:31:21.915 bw ( KiB/s): min=15360, max=29952, per=35.95%, avg=23910.40, stdev=4246.82, samples=20 00:31:21.915 iops : min= 120, max= 234, avg=186.80, stdev=33.18, samples=20 00:31:21.915 lat (msec) : 10=14.00%, 20=78.03%, 100=7.91%, 250=0.05% 00:31:21.915 cpu : usr=95.93%, sys=3.80%, ctx=21, majf=0, minf=170 00:31:21.915 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:21.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.915 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.915 issued rwts: total=1871,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:21.915 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:21.915 00:31:21.915 Run status group 0 (all jobs): 00:31:21.915 READ: bw=65.0MiB/s (68.1MB/s), 19.9MiB/s-23.3MiB/s (20.9MB/s-24.4MB/s), io=653MiB (684MB), run=10044-10050msec 00:31:21.915 22:28:45 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:21.915 22:28:45 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:31:21.915 22:28:45 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:31:21.915 22:28:45 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:21.915 22:28:45 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:31:21.915 22:28:45 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:21.915 22:28:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.915 22:28:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:21.915 22:28:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.915 22:28:45 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:21.915 22:28:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.915 22:28:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:21.915 22:28:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.915 00:31:21.915 real 0m11.118s 00:31:21.915 user 0m41.784s 00:31:21.915 sys 0m1.390s 00:31:21.915 22:28:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:21.915 22:28:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:21.915 ************************************ 00:31:21.915 END TEST fio_dif_digest 00:31:21.915 ************************************ 00:31:21.915 22:28:45 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:21.915 22:28:45 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:21.915 22:28:45 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:31:21.915 22:28:45 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:21.915 22:28:45 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:31:21.915 22:28:45 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:21.915 22:28:45 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:31:21.915 22:28:45 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:21.915 22:28:45 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:21.915 rmmod nvme_tcp 00:31:21.915 rmmod nvme_fabrics 
00:31:21.915 rmmod nvme_keyring 00:31:21.915 22:28:45 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:21.915 22:28:45 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:31:21.915 22:28:45 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:31:21.915 22:28:45 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 2978267 ']' 00:31:21.915 22:28:45 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2978267 00:31:21.915 22:28:45 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 2978267 ']' 00:31:21.915 22:28:45 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 2978267 00:31:21.915 22:28:45 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:31:21.915 22:28:45 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:21.915 22:28:45 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2978267 00:31:21.915 22:28:45 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:21.915 22:28:45 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:21.915 22:28:45 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2978267' 00:31:21.915 killing process with pid 2978267 00:31:21.915 22:28:45 nvmf_dif -- common/autotest_common.sh@967 -- # kill 2978267 00:31:21.915 22:28:45 nvmf_dif -- common/autotest_common.sh@972 -- # wait 2978267 00:31:21.915 22:28:45 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:21.915 22:28:45 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:23.822 Waiting for block devices as requested 00:31:23.822 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:23.822 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:23.822 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:23.822 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:24.081 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:24.081 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:24.081 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:24.341 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:24.341 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:24.600 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:24.600 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:24.600 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:24.600 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:24.860 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:24.860 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:24.860 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:24.860 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:25.120 22:28:50 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:25.120 22:28:50 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:25.120 22:28:50 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:25.120 22:28:50 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:25.120 22:28:50 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:25.120 22:28:50 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:25.120 22:28:50 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:27.661 22:28:52 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:27.661 00:31:27.661 real 1m17.354s 00:31:27.661 user 7m54.845s 00:31:27.661 sys 0m19.566s 00:31:27.661 22:28:52 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:27.661 22:28:52 nvmf_dif -- common/autotest_common.sh@10 
-- # set +x 00:31:27.661 ************************************ 00:31:27.661 END TEST nvmf_dif 00:31:27.661 ************************************ 00:31:27.661 22:28:52 -- common/autotest_common.sh@1142 -- # return 0 00:31:27.661 22:28:52 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:27.661 22:28:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:27.661 22:28:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:27.661 22:28:52 -- common/autotest_common.sh@10 -- # set +x 00:31:27.661 ************************************ 00:31:27.661 START TEST nvmf_abort_qd_sizes 00:31:27.661 ************************************ 00:31:27.661 22:28:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:27.661 * Looking for test storage... 00:31:27.661 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:27.661 22:28:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:27.661 22:28:52 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:31:27.661 22:28:52 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:27.661 22:28:52 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:27.661 22:28:52 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:27.661 22:28:52 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:27.661 22:28:52 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:27.661 22:28:52 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:27.661 22:28:52 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:27.661 22:28:52 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:27.661 22:28:52 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:27.661 22:28:52 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:27.661 22:28:52 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:27.661 22:28:52 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:27.661 22:28:52 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:27.661 22:28:52 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:27.661 22:28:52 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:27.661 22:28:52 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:27.661 22:28:52 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:27.661 22:28:52 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:27.661 22:28:52 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:27.661 22:28:52 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:27.661 22:28:52 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.661 22:28:52 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.661 22:28:52 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.661 22:28:52 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:31:27.661 22:28:52 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.661 22:28:52 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:31:27.661 22:28:52 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:27.661 22:28:52 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:27.661 22:28:52 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:27.661 22:28:52 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:27.661 22:28:52 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:27.661 22:28:52 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:27.662 22:28:52 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:27.662 22:28:52 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:27.662 22:28:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:31:27.662 22:28:52 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:27.662 22:28:52 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:27.662 22:28:52 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:27.662 22:28:52 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:27.662 22:28:52 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:27.662 22:28:52 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:27.662 22:28:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:27.662 22:28:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:27.662 22:28:52 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:27.662 22:28:52 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:27.662 22:28:52 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:31:27.662 22:28:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:34.315 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:34.315 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:31:34.315 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:34.315 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:34.315 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:34.315 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:34.315 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:34.315 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:31:34.315 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:34.315 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:31:34.315 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:31:34.315 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:31:34.315 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:31:34.315 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:31:34.315 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:31:34.315 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:34.316 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:34.316 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:34.316 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:34.316 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
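Both ports of the Intel E810 NIC have now been identified (cvl_0_0 for the target side, cvl_0_1 for the initiator). Condensed, the nvmf_tcp_init sequence traced below builds the test network by isolating the target port in its own network namespace and leaving the initiator port in the host namespace (a sketch of the traced commands, same names and addresses as this run):

    ip netns add cvl_0_0_ns_spdk                                        # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                  # reachability check, initiator -> target

The nvmf_tgt process for the test is then started inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt), which is why the 10.0.0.2 listener address used throughout belongs to the namespaced port.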
00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:34.316 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:34.316 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.602 ms 00:31:34.316 00:31:34.316 --- 10.0.0.2 ping statistics --- 00:31:34.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:34.316 rtt min/avg/max/mdev = 0.602/0.602/0.602/0.000 ms 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:34.316 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:34.316 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.337 ms 00:31:34.316 00:31:34.316 --- 10.0.0.1 ping statistics --- 00:31:34.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:34.316 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:31:34.316 22:28:59 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:37.622 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:37.622 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:37.622 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:37.622 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:37.622 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:37.622 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:37.622 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:37.622 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:37.622 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:37.883 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:37.883 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:37.883 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:37.883 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:37.883 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:37.883 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:37.883 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:37.883 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:31:38.143 22:29:03 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:38.143 22:29:03 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:38.143 22:29:03 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:38.143 22:29:03 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:38.143 22:29:03 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:38.143 22:29:03 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:38.143 22:29:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:31:38.143 22:29:03 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:38.143 22:29:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:38.143 22:29:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:38.143 22:29:03 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2998016 00:31:38.143 22:29:03 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2998016 00:31:38.143 22:29:03 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:31:38.143 22:29:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 2998016 ']' 00:31:38.143 22:29:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:38.143 22:29:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:38.143 22:29:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:38.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:38.143 22:29:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:38.143 22:29:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:38.404 [2024-07-15 22:29:03.491687] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:31:38.404 [2024-07-15 22:29:03.491735] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:38.404 EAL: No free 2048 kB hugepages reported on node 1 00:31:38.404 [2024-07-15 22:29:03.558576] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:38.404 [2024-07-15 22:29:03.626537] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:38.404 [2024-07-15 22:29:03.626576] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:38.404 [2024-07-15 22:29:03.626583] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:38.404 [2024-07-15 22:29:03.626589] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:38.404 [2024-07-15 22:29:03.626595] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:38.404 [2024-07-15 22:29:03.626740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:38.404 [2024-07-15 22:29:03.626854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:38.404 [2024-07-15 22:29:03.627008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:38.404 [2024-07-15 22:29:03.627010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:38.975 22:29:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:38.975 22:29:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:31:38.975 22:29:04 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:38.975 22:29:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:38.975 22:29:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:39.236 22:29:04 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:39.236 22:29:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:31:39.236 22:29:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:31:39.236 22:29:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:31:39.236 22:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:31:39.236 22:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:31:39.236 22:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:31:39.236 22:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:31:39.236 22:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:31:39.236 22:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:31:39.236 22:29:04 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:31:39.236 22:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:31:39.236 22:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:31:39.236 22:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:31:39.236 22:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:31:39.236 22:29:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:31:39.236 22:29:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:31:39.236 22:29:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:31:39.236 22:29:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:39.236 22:29:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:39.236 22:29:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:39.236 ************************************ 00:31:39.236 START TEST spdk_target_abort 00:31:39.236 ************************************ 00:31:39.236 22:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:31:39.236 22:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:31:39.236 22:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:31:39.236 22:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.236 22:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:39.498 spdk_targetn1 00:31:39.498 22:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.498 22:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:39.498 22:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.498 22:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:39.498 [2024-07-15 22:29:04.669274] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:39.498 22:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.498 22:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:31:39.498 22:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.498 22:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:39.498 22:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.498 22:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:31:39.498 22:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.498 22:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:39.498 22:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.498 22:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:31:39.498 22:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.498 22:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:39.498 [2024-07-15 22:29:04.709542] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:39.498 22:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.498 22:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:31:39.498 22:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:39.498 22:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:39.498 22:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:31:39.498 22:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:39.498 22:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:39.498 22:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:39.498 22:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:39.498 22:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:39.498 22:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:39.498 22:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:39.498 22:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:39.498 22:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:39.498 22:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:39.498 22:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:31:39.498 22:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:39.498 22:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:39.498 22:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:39.498 22:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:39.498 22:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:39.498 22:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:39.498 EAL: No free 2048 kB hugepages 
reported on node 1 00:31:39.760 [2024-07-15 22:29:04.876465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:224 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:31:39.760 [2024-07-15 22:29:04.876493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:001f p:1 m:0 dnr:0 00:31:39.760 [2024-07-15 22:29:04.892588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:640 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:31:39.760 [2024-07-15 22:29:04.892605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0053 p:1 m:0 dnr:0 00:31:39.760 [2024-07-15 22:29:04.988575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:2128 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:31:39.760 [2024-07-15 22:29:04.988594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:39.760 [2024-07-15 22:29:05.011611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:2832 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:31:39.760 [2024-07-15 22:29:05.011628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:43.063 Initializing NVMe Controllers 00:31:43.063 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:43.063 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:43.063 Initialization complete. Launching workers. 00:31:43.063 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 7142, failed: 4 00:31:43.063 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1574, failed to submit 5572 00:31:43.063 success 618, unsuccess 956, failed 0 00:31:43.063 22:29:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:43.063 22:29:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:43.063 EAL: No free 2048 kB hugepages reported on node 1 00:31:43.063 [2024-07-15 22:29:08.134800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:216 len:8 PRP1 0x200007c56000 PRP2 0x0 00:31:43.063 [2024-07-15 22:29:08.134844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:0023 p:1 m:0 dnr:0 00:31:43.063 [2024-07-15 22:29:08.151254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:190 nsid:1 lba:480 len:8 PRP1 0x200007c3a000 PRP2 0x0 00:31:43.063 [2024-07-15 22:29:08.151279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:190 cdw0:0 sqhd:004d p:1 m:0 dnr:0 00:31:43.063 [2024-07-15 22:29:08.159110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:169 nsid:1 lba:664 len:8 PRP1 0x200007c56000 PRP2 0x0 00:31:43.063 [2024-07-15 22:29:08.159141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:169 cdw0:0 sqhd:0064 p:1 m:0 dnr:0 00:31:46.360 Initializing NVMe Controllers 00:31:46.360 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:46.360 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:46.360 Initialization complete. Launching workers. 00:31:46.360 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8550, failed: 3 00:31:46.360 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1193, failed to submit 7360 00:31:46.360 success 360, unsuccess 833, failed 0 00:31:46.360 22:29:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:46.360 22:29:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:46.360 EAL: No free 2048 kB hugepages reported on node 1 00:31:49.659 Initializing NVMe Controllers 00:31:49.659 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:49.659 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:49.659 Initialization complete. Launching workers. 00:31:49.659 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 42569, failed: 0 00:31:49.659 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2376, failed to submit 40193 00:31:49.659 success 579, unsuccess 1797, failed 0 00:31:49.659 22:29:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:31:49.659 22:29:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.659 22:29:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:49.659 22:29:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.659 22:29:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:31:49.659 22:29:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.659 22:29:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:51.045 22:29:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.045 22:29:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2998016 00:31:51.045 22:29:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 2998016 ']' 00:31:51.045 22:29:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 2998016 00:31:51.045 22:29:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:31:51.045 22:29:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:51.045 22:29:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2998016 00:31:51.045 22:29:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:51.045 22:29:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:51.045 22:29:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with 
pid 2998016' 00:31:51.045 killing process with pid 2998016 00:31:51.045 22:29:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 2998016 00:31:51.045 22:29:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 2998016 00:31:51.307 00:31:51.307 real 0m12.120s 00:31:51.307 user 0m49.069s 00:31:51.307 sys 0m2.006s 00:31:51.307 22:29:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:51.307 22:29:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:51.307 ************************************ 00:31:51.307 END TEST spdk_target_abort 00:31:51.307 ************************************ 00:31:51.307 22:29:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:31:51.307 22:29:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:31:51.307 22:29:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:51.307 22:29:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:51.307 22:29:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:51.307 ************************************ 00:31:51.307 START TEST kernel_target_abort 00:31:51.307 ************************************ 00:31:51.307 22:29:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:31:51.307 22:29:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:31:51.307 22:29:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:31:51.307 22:29:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:51.307 22:29:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:51.307 22:29:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:51.307 22:29:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:51.307 22:29:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:51.307 22:29:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:51.307 22:29:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:51.307 22:29:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:51.307 22:29:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:51.307 22:29:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:51.307 22:29:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:51.308 22:29:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:51.308 22:29:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:51.308 22:29:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:51.308 22:29:16 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:51.308 22:29:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:31:51.308 22:29:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:31:51.308 22:29:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:51.308 22:29:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:51.308 22:29:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:53.877 Waiting for block devices as requested 00:31:54.137 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:54.137 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:54.137 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:54.398 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:54.398 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:54.398 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:54.398 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:54.658 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:54.658 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:54.918 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:54.918 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:54.918 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:55.179 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:55.179 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:55.179 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:55.179 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:55.439 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:55.699 22:29:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:55.699 22:29:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:55.699 22:29:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:55.699 22:29:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:31:55.699 22:29:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:55.699 22:29:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:55.699 22:29:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:55.699 22:29:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:55.699 22:29:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:55.699 No valid GPT data, bailing 00:31:55.699 22:29:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:55.699 22:29:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:31:55.699 22:29:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:31:55.699 22:29:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:55.699 22:29:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:55.699 22:29:20 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:55.699 22:29:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:55.699 22:29:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:55.699 22:29:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:55.699 22:29:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:31:55.699 22:29:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:55.699 22:29:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:31:55.699 22:29:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:55.699 22:29:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:31:55.699 22:29:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:31:55.699 22:29:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:31:55.699 22:29:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:55.699 22:29:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:31:55.699 00:31:55.699 Discovery Log Number of Records 2, Generation counter 2 00:31:55.699 =====Discovery Log Entry 0====== 00:31:55.699 trtype: tcp 00:31:55.699 adrfam: ipv4 00:31:55.699 subtype: current discovery subsystem 00:31:55.699 treq: not specified, sq flow control disable supported 00:31:55.699 portid: 1 00:31:55.699 trsvcid: 4420 00:31:55.699 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:55.699 traddr: 10.0.0.1 00:31:55.699 eflags: none 00:31:55.699 sectype: none 00:31:55.699 =====Discovery Log Entry 1====== 00:31:55.699 trtype: tcp 00:31:55.699 adrfam: ipv4 00:31:55.699 subtype: nvme subsystem 00:31:55.699 treq: not specified, sq flow control disable supported 00:31:55.699 portid: 1 00:31:55.699 trsvcid: 4420 00:31:55.699 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:55.699 traddr: 10.0.0.1 00:31:55.699 eflags: none 00:31:55.699 sectype: none 00:31:55.699 22:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:31:55.699 22:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:55.699 22:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:55.699 22:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:31:55.699 22:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:55.699 22:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:55.699 22:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:55.699 22:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:55.699 
22:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:55.699 22:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:55.699 22:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:55.699 22:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:55.699 22:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:55.699 22:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:55.699 22:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:31:55.699 22:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:55.699 22:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:31:55.699 22:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:55.699 22:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:55.699 22:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:55.699 22:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:55.958 EAL: No free 2048 kB hugepages reported on node 1 00:31:59.277 Initializing NVMe Controllers 00:31:59.277 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:59.277 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:59.277 Initialization complete. Launching workers. 00:31:59.277 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 44678, failed: 0 00:31:59.277 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 44678, failed to submit 0 00:31:59.277 success 0, unsuccess 44678, failed 0 00:31:59.277 22:29:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:59.277 22:29:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:59.277 EAL: No free 2048 kB hugepages reported on node 1 00:32:02.574 Initializing NVMe Controllers 00:32:02.574 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:02.574 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:02.574 Initialization complete. Launching workers. 
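The kernel-side target used by this second half of the test (the configure_kernel_target calls and the discovery listing above) can be built by hand with plain configfs operations. A sketch only: the xtrace output hides the redirection targets of the echo commands, so the attribute files below are the standard nvmet names inferred from context, and /dev/nvme0n1 is the namespace device the trace selected.

# Kernel NVMe-oF/TCP target via configfs, mirroring the mkdir/echo/ln -s
# calls traced above.
nqn=nqn.2016-06.io.spdk:testnqn
sub=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1

sudo modprobe nvmet nvmet-tcp
sudo mkdir "$sub" "$sub/namespaces/1" "$port"
echo 1            | sudo tee "$sub/attr_allow_any_host"      > /dev/null
echo /dev/nvme0n1 | sudo tee "$sub/namespaces/1/device_path" > /dev/null
echo 1            | sudo tee "$sub/namespaces/1/enable"      > /dev/null
echo 10.0.0.1     | sudo tee "$port/addr_traddr"             > /dev/null
echo tcp          | sudo tee "$port/addr_trtype"             > /dev/null
echo 4420         | sudo tee "$port/addr_trsvcid"            > /dev/null
echo ipv4         | sudo tee "$port/addr_adrfam"             > /dev/null
sudo ln -s "$sub" "$port/subsystems/$nqn"

# Check the listener the same way the trace does:
sudo nvme discover -t tcp -a 10.0.0.1 -s 4420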
00:32:02.574 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 85566, failed: 0 00:32:02.574 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21550, failed to submit 64016 00:32:02.574 success 0, unsuccess 21550, failed 0 00:32:02.574 22:29:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:02.574 22:29:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:02.574 EAL: No free 2048 kB hugepages reported on node 1 00:32:05.118 Initializing NVMe Controllers 00:32:05.118 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:05.118 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:05.118 Initialization complete. Launching workers. 00:32:05.118 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 82436, failed: 0 00:32:05.118 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20590, failed to submit 61846 00:32:05.118 success 0, unsuccess 20590, failed 0 00:32:05.118 22:29:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:32:05.118 22:29:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:05.118 22:29:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:32:05.118 22:29:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:05.118 22:29:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:05.118 22:29:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:05.118 22:29:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:05.118 22:29:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:05.118 22:29:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:05.118 22:29:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:08.418 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:08.418 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:08.418 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:08.418 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:08.418 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:08.418 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:08.678 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:08.678 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:08.678 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:08.678 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:08.678 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:08.678 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:08.678 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:08.678 0000:00:01.3 (8086 0b00): ioatdma -> 
vfio-pci 00:32:08.678 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:08.678 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:10.591 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:32:10.853 00:32:10.853 real 0m19.364s 00:32:10.853 user 0m7.516s 00:32:10.853 sys 0m6.002s 00:32:10.853 22:29:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:10.853 22:29:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:10.853 ************************************ 00:32:10.853 END TEST kernel_target_abort 00:32:10.853 ************************************ 00:32:10.853 22:29:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:32:10.853 22:29:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:10.853 22:29:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:32:10.853 22:29:35 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:10.853 22:29:35 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:32:10.853 22:29:35 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:10.853 22:29:35 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:32:10.853 22:29:35 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:10.853 22:29:35 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:10.853 rmmod nvme_tcp 00:32:10.853 rmmod nvme_fabrics 00:32:10.853 rmmod nvme_keyring 00:32:10.853 22:29:36 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:10.853 22:29:36 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:32:10.853 22:29:36 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:32:10.853 22:29:36 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2998016 ']' 00:32:10.853 22:29:36 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2998016 00:32:10.853 22:29:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 2998016 ']' 00:32:10.853 22:29:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 2998016 00:32:10.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2998016) - No such process 00:32:10.853 22:29:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 2998016 is not found' 00:32:10.853 Process with pid 2998016 is not found 00:32:10.853 22:29:36 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:32:10.853 22:29:36 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:14.155 Waiting for block devices as requested 00:32:14.155 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:14.155 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:14.155 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:14.155 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:14.155 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:14.155 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:14.415 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:14.415 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:14.415 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:14.711 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:14.711 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:14.711 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:14.972 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:14.972 0000:00:01.2 (8086 0b00): vfio-pci -> 
ioatdma 00:32:14.972 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:14.972 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:15.232 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:15.492 22:29:40 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:15.492 22:29:40 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:15.492 22:29:40 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:15.492 22:29:40 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:15.492 22:29:40 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:15.492 22:29:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:15.492 22:29:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:17.406 22:29:42 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:17.406 00:32:17.406 real 0m50.092s 00:32:17.406 user 1m1.559s 00:32:17.406 sys 0m18.241s 00:32:17.406 22:29:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:17.406 22:29:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:17.406 ************************************ 00:32:17.406 END TEST nvmf_abort_qd_sizes 00:32:17.406 ************************************ 00:32:17.406 22:29:42 -- common/autotest_common.sh@1142 -- # return 0 00:32:17.406 22:29:42 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:17.406 22:29:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:17.406 22:29:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:17.406 22:29:42 -- common/autotest_common.sh@10 -- # set +x 00:32:17.668 ************************************ 00:32:17.668 START TEST keyring_file 00:32:17.668 ************************************ 00:32:17.668 22:29:42 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:17.668 * Looking for test storage... 
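The teardown that closes the abort test above is the reverse of the two sketches earlier; again a sketch, reusing the same SPDK_DIR placeholder and the configfs paths from the kernel-target sketch:

# clean_kernel_target plus nvmftestfini, as traced above: disable and remove
# the configfs tree, unload nvmet, unload the initiator modules, and give
# the PCI devices back to their kernel drivers.
echo 0 | sudo tee "$sub/namespaces/1/enable" > /dev/null
sudo rm -f "$port/subsystems/$nqn"
sudo rmdir "$sub/namespaces/1" "$port" "$sub"
sudo modprobe -r nvmet_tcp nvmet

sudo modprobe -r nvme-tcp                 # the rmmod lines above show nvme_fabrics and nvme_keyring going with it
sudo modprobe -r nvme-fabrics             # no-op if the previous line already removed it
sudo "$SPDK_DIR/scripts/setup.sh" reset   # vfio-pci -> ioatdma/nvme, as logged above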
00:32:17.668 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:17.668 22:29:42 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:17.668 22:29:42 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:17.668 22:29:42 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:32:17.668 22:29:42 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:17.668 22:29:42 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:17.668 22:29:42 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:17.668 22:29:42 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:17.668 22:29:42 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:17.668 22:29:42 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:17.668 22:29:42 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:17.668 22:29:42 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:17.668 22:29:42 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:17.668 22:29:42 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:17.668 22:29:42 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:17.668 22:29:42 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:17.668 22:29:42 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:17.668 22:29:42 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:17.668 22:29:42 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:17.668 22:29:42 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:17.668 22:29:42 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:17.668 22:29:42 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:17.668 22:29:42 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:17.668 22:29:42 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:17.668 22:29:42 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.668 22:29:42 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.668 22:29:42 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.668 22:29:42 keyring_file -- paths/export.sh@5 -- # export PATH 00:32:17.668 22:29:42 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.668 22:29:42 keyring_file -- nvmf/common.sh@47 -- # : 0 00:32:17.668 22:29:42 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:17.668 22:29:42 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:17.668 22:29:42 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:17.668 22:29:42 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:17.668 22:29:42 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:17.668 22:29:42 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:17.668 22:29:42 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:17.668 22:29:42 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:17.668 22:29:42 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:17.668 22:29:42 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:17.668 22:29:42 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:17.668 22:29:42 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:32:17.668 22:29:42 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:32:17.668 22:29:42 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:32:17.668 22:29:42 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:17.668 22:29:42 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:17.668 22:29:42 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:17.668 22:29:42 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:17.668 22:29:42 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:17.668 22:29:42 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:17.668 22:29:42 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.CC3k54YuUs 00:32:17.668 22:29:42 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:17.668 22:29:42 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:17.668 22:29:42 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:17.668 22:29:42 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:17.668 22:29:42 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:17.668 22:29:42 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:17.668 22:29:42 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:17.668 22:29:42 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.CC3k54YuUs 00:32:17.668 22:29:42 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.CC3k54YuUs 00:32:17.668 22:29:42 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.CC3k54YuUs 00:32:17.668 22:29:42 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:32:17.668 22:29:42 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:17.668 22:29:42 keyring_file -- keyring/common.sh@17 -- # name=key1 00:32:17.668 22:29:42 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:17.668 22:29:42 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:17.668 22:29:42 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:17.668 22:29:42 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.rEOhISAbE2 00:32:17.668 22:29:42 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:17.668 22:29:42 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:17.668 22:29:42 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:17.668 22:29:42 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:17.668 22:29:42 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:17.668 22:29:42 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:17.668 22:29:42 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:17.929 22:29:43 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.rEOhISAbE2 00:32:17.929 22:29:43 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.rEOhISAbE2 00:32:17.929 22:29:43 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.rEOhISAbE2 00:32:17.929 22:29:43 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:17.929 22:29:43 keyring_file -- keyring/file.sh@30 -- # tgtpid=3007971 00:32:17.929 22:29:43 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3007971 00:32:17.929 22:29:43 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3007971 ']' 00:32:17.929 22:29:43 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:17.929 22:29:43 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:17.929 22:29:43 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:17.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:17.929 22:29:43 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:17.929 22:29:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:17.929 [2024-07-15 22:29:43.056721] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
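The two /tmp/tmp.* files prepared above are NVMe TLS pre-shared keys in interchange format, produced by prep_key/format_interchange_psk from the hex strings key0 and key1. The python body of format_interchange_psk is not shown in the trace, so the following is an equivalent stand-in for key0, assuming the usual interchange layout with hash indicator 00 because the test passes digest 0:

# Write 00112233445566778899aabbccddeeff as an NVMeTLSkey-1 interchange key
# (base64 of the key bytes followed by their CRC32) into a 0600 temp file.
key0path=$(mktemp)
python3 - "$key0path" <<'PY'
import base64, sys, zlib
key = bytes.fromhex("00112233445566778899aabbccddeeff")
crc = zlib.crc32(key).to_bytes(4, "little")   # CRC32 appended per the interchange format
with open(sys.argv[1], "w") as f:
    f.write("NVMeTLSkey-1:00:" + base64.b64encode(key + crc).decode() + ":")
PY
chmod 0600 "$key0path"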
00:32:17.929 [2024-07-15 22:29:43.056816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3007971 ] 00:32:17.929 EAL: No free 2048 kB hugepages reported on node 1 00:32:17.929 [2024-07-15 22:29:43.124968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:17.929 [2024-07-15 22:29:43.200036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:18.868 22:29:43 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:18.868 22:29:43 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:32:18.868 22:29:43 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:32:18.868 22:29:43 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.868 22:29:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:18.868 [2024-07-15 22:29:43.855266] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:18.868 null0 00:32:18.868 [2024-07-15 22:29:43.887309] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:18.868 [2024-07-15 22:29:43.887564] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:18.868 [2024-07-15 22:29:43.895317] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:32:18.868 22:29:43 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.868 22:29:43 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:18.868 22:29:43 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:18.868 22:29:43 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:18.868 22:29:43 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:18.868 22:29:43 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:18.868 22:29:43 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:18.868 22:29:43 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:18.868 22:29:43 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:18.868 22:29:43 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.868 22:29:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:18.868 [2024-07-15 22:29:43.907350] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:32:18.868 request: 00:32:18.868 { 00:32:18.868 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:32:18.868 "secure_channel": false, 00:32:18.868 "listen_address": { 00:32:18.868 "trtype": "tcp", 00:32:18.868 "traddr": "127.0.0.1", 00:32:18.868 "trsvcid": "4420" 00:32:18.868 }, 00:32:18.868 "method": "nvmf_subsystem_add_listener", 00:32:18.868 "req_id": 1 00:32:18.868 } 00:32:18.868 Got JSON-RPC error response 00:32:18.868 response: 00:32:18.868 { 00:32:18.868 "code": -32602, 00:32:18.868 "message": "Invalid parameters" 00:32:18.868 } 00:32:18.868 22:29:43 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:18.868 22:29:43 keyring_file -- common/autotest_common.sh@651 -- # es=1 
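The bdevperf flow exercised below takes only a handful of commands; a sketch reusing the SPDK_DIR and key0path placeholders from the earlier sketches, with everything else taken from the trace: start bdevperf idle on its own RPC socket, register the key file, attach an NVMe/TCP controller that presents it, and run the workload. The later attach attempt that passes --psk key1 instead is the deliberate failure case (hence the NOT wrapper and the Input/output error further down).

# Start bdevperf waiting for RPC (-z), add the PSK file as keyring key key0,
# attach the controller with that key, then drive the one-second randrw run.
"$SPDK_DIR/build/examples/bdevperf" -m 2 -z -r /var/tmp/bperf.sock \
    -q 128 -o 4096 -w randrw -M 50 -t 1 &

bperf_rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }
bperf_rpc keyring_file_add_key key0 "$key0path"
bperf_rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests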
00:32:18.868 22:29:43 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:18.868 22:29:43 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:18.868 22:29:43 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:18.868 22:29:43 keyring_file -- keyring/file.sh@46 -- # bperfpid=3008222 00:32:18.868 22:29:43 keyring_file -- keyring/file.sh@48 -- # waitforlisten 3008222 /var/tmp/bperf.sock 00:32:18.868 22:29:43 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3008222 ']' 00:32:18.868 22:29:43 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:18.868 22:29:43 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:18.868 22:29:43 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:18.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:18.868 22:29:43 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:18.868 22:29:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:18.869 22:29:43 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:32:18.869 [2024-07-15 22:29:43.961403] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 00:32:18.869 [2024-07-15 22:29:43.961450] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3008222 ] 00:32:18.869 EAL: No free 2048 kB hugepages reported on node 1 00:32:18.869 [2024-07-15 22:29:44.036534] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:18.869 [2024-07-15 22:29:44.100613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:19.440 22:29:44 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:19.440 22:29:44 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:32:19.440 22:29:44 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.CC3k54YuUs 00:32:19.440 22:29:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.CC3k54YuUs 00:32:19.701 22:29:44 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.rEOhISAbE2 00:32:19.701 22:29:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.rEOhISAbE2 00:32:19.961 22:29:45 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:32:19.961 22:29:45 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:32:19.961 22:29:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:19.961 22:29:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:19.961 22:29:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:19.961 22:29:45 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.CC3k54YuUs == \/\t\m\p\/\t\m\p\.\C\C\3\k\5\4\Y\u\U\s ]] 00:32:19.961 22:29:45 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:32:19.961 22:29:45 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:32:19.961 22:29:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:19.961 22:29:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:19.961 22:29:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:20.221 22:29:45 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.rEOhISAbE2 == \/\t\m\p\/\t\m\p\.\r\E\O\h\I\S\A\b\E\2 ]] 00:32:20.221 22:29:45 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:32:20.221 22:29:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:20.221 22:29:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:20.221 22:29:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:20.221 22:29:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:20.221 22:29:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:20.221 22:29:45 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:32:20.221 22:29:45 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:32:20.221 22:29:45 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:20.221 22:29:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:20.221 22:29:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:20.221 22:29:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:20.221 22:29:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:20.481 22:29:45 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:32:20.482 22:29:45 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:20.482 22:29:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:20.482 [2024-07-15 22:29:45.797782] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:20.741 nvme0n1 00:32:20.741 22:29:45 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:32:20.741 22:29:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:20.741 22:29:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:20.741 22:29:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:20.741 22:29:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:20.741 22:29:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:20.741 22:29:46 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:32:20.741 22:29:46 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:32:20.741 22:29:46 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:20.741 22:29:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:20.741 22:29:46 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:20.741 22:29:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:20.741 22:29:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:21.002 22:29:46 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:32:21.002 22:29:46 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:21.002 Running I/O for 1 seconds... 00:32:22.385 00:32:22.385 Latency(us) 00:32:22.385 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:22.385 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:32:22.385 nvme0n1 : 1.02 7630.20 29.81 0.00 0.00 16615.05 4505.60 22719.15 00:32:22.385 =================================================================================================================== 00:32:22.385 Total : 7630.20 29.81 0.00 0.00 16615.05 4505.60 22719.15 00:32:22.385 0 00:32:22.385 22:29:47 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:22.385 22:29:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:22.385 22:29:47 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:32:22.385 22:29:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:22.385 22:29:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:22.385 22:29:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:22.385 22:29:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:22.385 22:29:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:22.385 22:29:47 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:32:22.385 22:29:47 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:32:22.385 22:29:47 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:22.385 22:29:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:22.385 22:29:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:22.385 22:29:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:22.385 22:29:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:22.646 22:29:47 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:32:22.646 22:29:47 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:22.646 22:29:47 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:22.646 22:29:47 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:22.646 22:29:47 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:22.646 22:29:47 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:22.646 22:29:47 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:22.646 22:29:47 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:22.646 22:29:47 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:22.646 22:29:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:22.907 [2024-07-15 22:29:47.977189] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:22.907 [2024-07-15 22:29:47.977208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe29d0 (107): Transport endpoint is not connected 00:32:22.907 [2024-07-15 22:29:47.978203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe29d0 (9): Bad file descriptor 00:32:22.907 [2024-07-15 22:29:47.979205] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:22.907 [2024-07-15 22:29:47.979212] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:22.907 [2024-07-15 22:29:47.979218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:22.907 request: 00:32:22.907 { 00:32:22.907 "name": "nvme0", 00:32:22.907 "trtype": "tcp", 00:32:22.907 "traddr": "127.0.0.1", 00:32:22.907 "adrfam": "ipv4", 00:32:22.907 "trsvcid": "4420", 00:32:22.907 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:22.907 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:22.907 "prchk_reftag": false, 00:32:22.907 "prchk_guard": false, 00:32:22.907 "hdgst": false, 00:32:22.907 "ddgst": false, 00:32:22.907 "psk": "key1", 00:32:22.907 "method": "bdev_nvme_attach_controller", 00:32:22.907 "req_id": 1 00:32:22.907 } 00:32:22.907 Got JSON-RPC error response 00:32:22.907 response: 00:32:22.907 { 00:32:22.907 "code": -5, 00:32:22.907 "message": "Input/output error" 00:32:22.907 } 00:32:22.907 22:29:47 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:22.907 22:29:47 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:22.907 22:29:47 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:22.907 22:29:47 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:22.907 22:29:47 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:32:22.907 22:29:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:22.907 22:29:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:22.907 22:29:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:22.907 22:29:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:22.908 22:29:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:22.908 22:29:48 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:32:22.908 22:29:48 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:32:22.908 22:29:48 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:22.908 22:29:48 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:22.908 22:29:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:22.908 22:29:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:22.908 22:29:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:23.169 22:29:48 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:32:23.169 22:29:48 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:32:23.169 22:29:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:23.169 22:29:48 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:32:23.169 22:29:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:32:23.430 22:29:48 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:32:23.430 22:29:48 keyring_file -- keyring/file.sh@77 -- # jq length 00:32:23.430 22:29:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:23.691 22:29:48 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:32:23.691 22:29:48 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.CC3k54YuUs 00:32:23.691 22:29:48 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.CC3k54YuUs 00:32:23.691 22:29:48 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:23.691 22:29:48 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.CC3k54YuUs 00:32:23.691 22:29:48 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:23.691 22:29:48 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:23.691 22:29:48 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:23.691 22:29:48 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:23.691 22:29:48 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.CC3k54YuUs 00:32:23.691 22:29:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.CC3k54YuUs 00:32:23.691 [2024-07-15 22:29:48.922979] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.CC3k54YuUs': 0100660 00:32:23.691 [2024-07-15 22:29:48.922998] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:32:23.691 request: 00:32:23.691 { 00:32:23.691 "name": "key0", 00:32:23.691 "path": "/tmp/tmp.CC3k54YuUs", 00:32:23.691 "method": "keyring_file_add_key", 00:32:23.691 "req_id": 1 00:32:23.691 } 00:32:23.691 Got JSON-RPC error response 00:32:23.691 response: 00:32:23.691 { 00:32:23.691 "code": -1, 00:32:23.691 "message": "Operation not permitted" 00:32:23.691 } 00:32:23.691 22:29:48 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:23.691 22:29:48 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:23.691 22:29:48 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:23.691 22:29:48 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:23.691 22:29:48 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.CC3k54YuUs 00:32:23.691 22:29:48 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.CC3k54YuUs 00:32:23.691 22:29:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.CC3k54YuUs 00:32:23.952 22:29:49 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.CC3k54YuUs 00:32:23.952 22:29:49 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:32:23.952 22:29:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:23.952 22:29:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:23.952 22:29:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:23.952 22:29:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:23.952 22:29:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:23.952 22:29:49 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:32:23.952 22:29:49 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:23.952 22:29:49 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:23.952 22:29:49 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:23.952 22:29:49 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:23.952 22:29:49 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:23.952 22:29:49 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:23.952 22:29:49 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:23.953 22:29:49 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:23.953 22:29:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:24.214 [2024-07-15 22:29:49.400197] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.CC3k54YuUs': No such file or directory 00:32:24.214 [2024-07-15 22:29:49.400210] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:32:24.214 [2024-07-15 22:29:49.400230] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:32:24.214 [2024-07-15 22:29:49.400235] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:24.214 [2024-07-15 22:29:49.400239] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:32:24.214 request: 00:32:24.214 { 00:32:24.214 "name": "nvme0", 00:32:24.214 "trtype": "tcp", 00:32:24.214 "traddr": "127.0.0.1", 00:32:24.214 "adrfam": "ipv4", 00:32:24.214 
"trsvcid": "4420", 00:32:24.214 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:24.214 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:24.214 "prchk_reftag": false, 00:32:24.214 "prchk_guard": false, 00:32:24.214 "hdgst": false, 00:32:24.214 "ddgst": false, 00:32:24.214 "psk": "key0", 00:32:24.214 "method": "bdev_nvme_attach_controller", 00:32:24.214 "req_id": 1 00:32:24.214 } 00:32:24.214 Got JSON-RPC error response 00:32:24.214 response: 00:32:24.214 { 00:32:24.214 "code": -19, 00:32:24.214 "message": "No such device" 00:32:24.214 } 00:32:24.214 22:29:49 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:24.214 22:29:49 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:24.214 22:29:49 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:24.214 22:29:49 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:24.214 22:29:49 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:32:24.214 22:29:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:24.475 22:29:49 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:24.475 22:29:49 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:24.475 22:29:49 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:24.475 22:29:49 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:24.475 22:29:49 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:24.475 22:29:49 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:24.475 22:29:49 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Fv0fXRr9DL 00:32:24.475 22:29:49 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:24.475 22:29:49 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:24.475 22:29:49 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:24.475 22:29:49 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:24.475 22:29:49 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:24.475 22:29:49 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:24.475 22:29:49 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:24.475 22:29:49 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Fv0fXRr9DL 00:32:24.475 22:29:49 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Fv0fXRr9DL 00:32:24.475 22:29:49 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.Fv0fXRr9DL 00:32:24.475 22:29:49 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Fv0fXRr9DL 00:32:24.475 22:29:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Fv0fXRr9DL 00:32:24.475 22:29:49 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:24.475 22:29:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:24.736 nvme0n1 00:32:24.736 
22:29:50 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:32:24.736 22:29:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:24.736 22:29:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:24.736 22:29:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:24.736 22:29:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:24.736 22:29:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:24.997 22:29:50 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:32:24.997 22:29:50 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:32:24.997 22:29:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:25.283 22:29:50 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:32:25.283 22:29:50 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:32:25.283 22:29:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:25.283 22:29:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:25.283 22:29:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:25.283 22:29:50 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:32:25.283 22:29:50 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:32:25.283 22:29:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:25.283 22:29:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:25.283 22:29:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:25.283 22:29:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:25.283 22:29:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:25.544 22:29:50 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:32:25.544 22:29:50 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:25.544 22:29:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:25.544 22:29:50 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:32:25.544 22:29:50 keyring_file -- keyring/file.sh@104 -- # jq length 00:32:25.544 22:29:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:25.804 22:29:50 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:32:25.804 22:29:50 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Fv0fXRr9DL 00:32:25.804 22:29:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Fv0fXRr9DL 00:32:26.065 22:29:51 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.rEOhISAbE2 00:32:26.065 22:29:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.rEOhISAbE2 00:32:26.065 22:29:51 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:26.065 22:29:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:26.325 nvme0n1 00:32:26.325 22:29:51 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:32:26.325 22:29:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:32:26.586 22:29:51 keyring_file -- keyring/file.sh@112 -- # config='{ 00:32:26.586 "subsystems": [ 00:32:26.586 { 00:32:26.586 "subsystem": "keyring", 00:32:26.586 "config": [ 00:32:26.586 { 00:32:26.586 "method": "keyring_file_add_key", 00:32:26.586 "params": { 00:32:26.586 "name": "key0", 00:32:26.586 "path": "/tmp/tmp.Fv0fXRr9DL" 00:32:26.586 } 00:32:26.586 }, 00:32:26.586 { 00:32:26.586 "method": "keyring_file_add_key", 00:32:26.586 "params": { 00:32:26.586 "name": "key1", 00:32:26.586 "path": "/tmp/tmp.rEOhISAbE2" 00:32:26.586 } 00:32:26.586 } 00:32:26.586 ] 00:32:26.586 }, 00:32:26.586 { 00:32:26.586 "subsystem": "iobuf", 00:32:26.586 "config": [ 00:32:26.586 { 00:32:26.586 "method": "iobuf_set_options", 00:32:26.586 "params": { 00:32:26.586 "small_pool_count": 8192, 00:32:26.586 "large_pool_count": 1024, 00:32:26.586 "small_bufsize": 8192, 00:32:26.586 "large_bufsize": 135168 00:32:26.586 } 00:32:26.586 } 00:32:26.586 ] 00:32:26.586 }, 00:32:26.586 { 00:32:26.586 "subsystem": "sock", 00:32:26.586 "config": [ 00:32:26.586 { 00:32:26.586 "method": "sock_set_default_impl", 00:32:26.586 "params": { 00:32:26.586 "impl_name": "posix" 00:32:26.586 } 00:32:26.586 }, 00:32:26.586 { 00:32:26.586 "method": "sock_impl_set_options", 00:32:26.586 "params": { 00:32:26.586 "impl_name": "ssl", 00:32:26.586 "recv_buf_size": 4096, 00:32:26.586 "send_buf_size": 4096, 00:32:26.586 "enable_recv_pipe": true, 00:32:26.586 "enable_quickack": false, 00:32:26.586 "enable_placement_id": 0, 00:32:26.586 "enable_zerocopy_send_server": true, 00:32:26.586 "enable_zerocopy_send_client": false, 00:32:26.586 "zerocopy_threshold": 0, 00:32:26.586 "tls_version": 0, 00:32:26.586 "enable_ktls": false 00:32:26.586 } 00:32:26.586 }, 00:32:26.586 { 00:32:26.586 "method": "sock_impl_set_options", 00:32:26.586 "params": { 00:32:26.586 "impl_name": "posix", 00:32:26.586 "recv_buf_size": 2097152, 00:32:26.586 "send_buf_size": 2097152, 00:32:26.586 "enable_recv_pipe": true, 00:32:26.586 "enable_quickack": false, 00:32:26.586 "enable_placement_id": 0, 00:32:26.586 "enable_zerocopy_send_server": true, 00:32:26.586 "enable_zerocopy_send_client": false, 00:32:26.586 "zerocopy_threshold": 0, 00:32:26.586 "tls_version": 0, 00:32:26.586 "enable_ktls": false 00:32:26.586 } 00:32:26.586 } 00:32:26.586 ] 00:32:26.586 }, 00:32:26.586 { 00:32:26.586 "subsystem": "vmd", 00:32:26.586 "config": [] 00:32:26.586 }, 00:32:26.586 { 00:32:26.586 "subsystem": "accel", 00:32:26.586 "config": [ 00:32:26.586 { 00:32:26.586 "method": "accel_set_options", 00:32:26.586 "params": { 00:32:26.586 "small_cache_size": 128, 00:32:26.586 "large_cache_size": 16, 00:32:26.586 "task_count": 2048, 00:32:26.586 "sequence_count": 2048, 00:32:26.586 "buf_count": 2048 00:32:26.586 } 00:32:26.586 } 00:32:26.586 ] 00:32:26.586 
}, 00:32:26.586 { 00:32:26.586 "subsystem": "bdev", 00:32:26.586 "config": [ 00:32:26.586 { 00:32:26.586 "method": "bdev_set_options", 00:32:26.586 "params": { 00:32:26.586 "bdev_io_pool_size": 65535, 00:32:26.586 "bdev_io_cache_size": 256, 00:32:26.586 "bdev_auto_examine": true, 00:32:26.586 "iobuf_small_cache_size": 128, 00:32:26.586 "iobuf_large_cache_size": 16 00:32:26.586 } 00:32:26.586 }, 00:32:26.586 { 00:32:26.586 "method": "bdev_raid_set_options", 00:32:26.586 "params": { 00:32:26.586 "process_window_size_kb": 1024 00:32:26.586 } 00:32:26.586 }, 00:32:26.586 { 00:32:26.586 "method": "bdev_iscsi_set_options", 00:32:26.586 "params": { 00:32:26.586 "timeout_sec": 30 00:32:26.586 } 00:32:26.586 }, 00:32:26.586 { 00:32:26.586 "method": "bdev_nvme_set_options", 00:32:26.586 "params": { 00:32:26.586 "action_on_timeout": "none", 00:32:26.586 "timeout_us": 0, 00:32:26.586 "timeout_admin_us": 0, 00:32:26.586 "keep_alive_timeout_ms": 10000, 00:32:26.586 "arbitration_burst": 0, 00:32:26.586 "low_priority_weight": 0, 00:32:26.586 "medium_priority_weight": 0, 00:32:26.586 "high_priority_weight": 0, 00:32:26.586 "nvme_adminq_poll_period_us": 10000, 00:32:26.586 "nvme_ioq_poll_period_us": 0, 00:32:26.586 "io_queue_requests": 512, 00:32:26.586 "delay_cmd_submit": true, 00:32:26.586 "transport_retry_count": 4, 00:32:26.586 "bdev_retry_count": 3, 00:32:26.586 "transport_ack_timeout": 0, 00:32:26.586 "ctrlr_loss_timeout_sec": 0, 00:32:26.586 "reconnect_delay_sec": 0, 00:32:26.586 "fast_io_fail_timeout_sec": 0, 00:32:26.586 "disable_auto_failback": false, 00:32:26.586 "generate_uuids": false, 00:32:26.586 "transport_tos": 0, 00:32:26.586 "nvme_error_stat": false, 00:32:26.586 "rdma_srq_size": 0, 00:32:26.586 "io_path_stat": false, 00:32:26.586 "allow_accel_sequence": false, 00:32:26.586 "rdma_max_cq_size": 0, 00:32:26.586 "rdma_cm_event_timeout_ms": 0, 00:32:26.586 "dhchap_digests": [ 00:32:26.586 "sha256", 00:32:26.586 "sha384", 00:32:26.586 "sha512" 00:32:26.586 ], 00:32:26.586 "dhchap_dhgroups": [ 00:32:26.586 "null", 00:32:26.586 "ffdhe2048", 00:32:26.586 "ffdhe3072", 00:32:26.586 "ffdhe4096", 00:32:26.586 "ffdhe6144", 00:32:26.586 "ffdhe8192" 00:32:26.586 ] 00:32:26.586 } 00:32:26.586 }, 00:32:26.586 { 00:32:26.586 "method": "bdev_nvme_attach_controller", 00:32:26.586 "params": { 00:32:26.586 "name": "nvme0", 00:32:26.586 "trtype": "TCP", 00:32:26.586 "adrfam": "IPv4", 00:32:26.586 "traddr": "127.0.0.1", 00:32:26.586 "trsvcid": "4420", 00:32:26.586 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:26.586 "prchk_reftag": false, 00:32:26.586 "prchk_guard": false, 00:32:26.586 "ctrlr_loss_timeout_sec": 0, 00:32:26.586 "reconnect_delay_sec": 0, 00:32:26.587 "fast_io_fail_timeout_sec": 0, 00:32:26.587 "psk": "key0", 00:32:26.587 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:26.587 "hdgst": false, 00:32:26.587 "ddgst": false 00:32:26.587 } 00:32:26.587 }, 00:32:26.587 { 00:32:26.587 "method": "bdev_nvme_set_hotplug", 00:32:26.587 "params": { 00:32:26.587 "period_us": 100000, 00:32:26.587 "enable": false 00:32:26.587 } 00:32:26.587 }, 00:32:26.587 { 00:32:26.587 "method": "bdev_wait_for_examine" 00:32:26.587 } 00:32:26.587 ] 00:32:26.587 }, 00:32:26.587 { 00:32:26.587 "subsystem": "nbd", 00:32:26.587 "config": [] 00:32:26.587 } 00:32:26.587 ] 00:32:26.587 }' 00:32:26.587 22:29:51 keyring_file -- keyring/file.sh@114 -- # killprocess 3008222 00:32:26.587 22:29:51 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3008222 ']' 00:32:26.587 22:29:51 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 3008222 00:32:26.587 22:29:51 keyring_file -- common/autotest_common.sh@953 -- # uname 00:32:26.587 22:29:51 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:26.587 22:29:51 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3008222 00:32:26.587 22:29:51 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:26.587 22:29:51 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:26.587 22:29:51 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3008222' 00:32:26.587 killing process with pid 3008222 00:32:26.587 22:29:51 keyring_file -- common/autotest_common.sh@967 -- # kill 3008222 00:32:26.587 Received shutdown signal, test time was about 1.000000 seconds 00:32:26.587 00:32:26.587 Latency(us) 00:32:26.587 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:26.587 =================================================================================================================== 00:32:26.587 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:26.587 22:29:51 keyring_file -- common/autotest_common.sh@972 -- # wait 3008222 00:32:26.587 22:29:51 keyring_file -- keyring/file.sh@117 -- # bperfpid=3009746 00:32:26.587 22:29:51 keyring_file -- keyring/file.sh@119 -- # waitforlisten 3009746 /var/tmp/bperf.sock 00:32:26.587 22:29:51 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3009746 ']' 00:32:26.587 22:29:51 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:26.587 22:29:51 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:26.587 22:29:51 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:32:26.587 22:29:51 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:26.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:32:26.587 22:29:51 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:26.587 22:29:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:26.587 22:29:51 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:32:26.587 "subsystems": [ 00:32:26.587 { 00:32:26.587 "subsystem": "keyring", 00:32:26.587 "config": [ 00:32:26.587 { 00:32:26.587 "method": "keyring_file_add_key", 00:32:26.587 "params": { 00:32:26.587 "name": "key0", 00:32:26.587 "path": "/tmp/tmp.Fv0fXRr9DL" 00:32:26.587 } 00:32:26.587 }, 00:32:26.587 { 00:32:26.587 "method": "keyring_file_add_key", 00:32:26.587 "params": { 00:32:26.587 "name": "key1", 00:32:26.587 "path": "/tmp/tmp.rEOhISAbE2" 00:32:26.587 } 00:32:26.587 } 00:32:26.587 ] 00:32:26.587 }, 00:32:26.587 { 00:32:26.587 "subsystem": "iobuf", 00:32:26.587 "config": [ 00:32:26.587 { 00:32:26.587 "method": "iobuf_set_options", 00:32:26.587 "params": { 00:32:26.587 "small_pool_count": 8192, 00:32:26.587 "large_pool_count": 1024, 00:32:26.587 "small_bufsize": 8192, 00:32:26.587 "large_bufsize": 135168 00:32:26.587 } 00:32:26.587 } 00:32:26.587 ] 00:32:26.587 }, 00:32:26.587 { 00:32:26.587 "subsystem": "sock", 00:32:26.587 "config": [ 00:32:26.587 { 00:32:26.587 "method": "sock_set_default_impl", 00:32:26.587 "params": { 00:32:26.587 "impl_name": "posix" 00:32:26.587 } 00:32:26.587 }, 00:32:26.587 { 00:32:26.587 "method": "sock_impl_set_options", 00:32:26.587 "params": { 00:32:26.587 "impl_name": "ssl", 00:32:26.587 "recv_buf_size": 4096, 00:32:26.587 "send_buf_size": 4096, 00:32:26.587 "enable_recv_pipe": true, 00:32:26.587 "enable_quickack": false, 00:32:26.587 "enable_placement_id": 0, 00:32:26.587 "enable_zerocopy_send_server": true, 00:32:26.587 "enable_zerocopy_send_client": false, 00:32:26.587 "zerocopy_threshold": 0, 00:32:26.587 "tls_version": 0, 00:32:26.587 "enable_ktls": false 00:32:26.587 } 00:32:26.587 }, 00:32:26.587 { 00:32:26.587 "method": "sock_impl_set_options", 00:32:26.587 "params": { 00:32:26.587 "impl_name": "posix", 00:32:26.587 "recv_buf_size": 2097152, 00:32:26.587 "send_buf_size": 2097152, 00:32:26.587 "enable_recv_pipe": true, 00:32:26.587 "enable_quickack": false, 00:32:26.587 "enable_placement_id": 0, 00:32:26.587 "enable_zerocopy_send_server": true, 00:32:26.587 "enable_zerocopy_send_client": false, 00:32:26.587 "zerocopy_threshold": 0, 00:32:26.587 "tls_version": 0, 00:32:26.587 "enable_ktls": false 00:32:26.587 } 00:32:26.587 } 00:32:26.587 ] 00:32:26.587 }, 00:32:26.587 { 00:32:26.587 "subsystem": "vmd", 00:32:26.587 "config": [] 00:32:26.587 }, 00:32:26.587 { 00:32:26.587 "subsystem": "accel", 00:32:26.587 "config": [ 00:32:26.587 { 00:32:26.587 "method": "accel_set_options", 00:32:26.587 "params": { 00:32:26.587 "small_cache_size": 128, 00:32:26.587 "large_cache_size": 16, 00:32:26.587 "task_count": 2048, 00:32:26.587 "sequence_count": 2048, 00:32:26.587 "buf_count": 2048 00:32:26.587 } 00:32:26.587 } 00:32:26.587 ] 00:32:26.587 }, 00:32:26.587 { 00:32:26.587 "subsystem": "bdev", 00:32:26.587 "config": [ 00:32:26.587 { 00:32:26.587 "method": "bdev_set_options", 00:32:26.587 "params": { 00:32:26.587 "bdev_io_pool_size": 65535, 00:32:26.587 "bdev_io_cache_size": 256, 00:32:26.587 "bdev_auto_examine": true, 00:32:26.587 "iobuf_small_cache_size": 128, 00:32:26.587 "iobuf_large_cache_size": 16 00:32:26.587 } 00:32:26.587 }, 00:32:26.587 { 00:32:26.587 "method": "bdev_raid_set_options", 00:32:26.587 "params": { 00:32:26.587 "process_window_size_kb": 1024 00:32:26.587 } 00:32:26.587 }, 00:32:26.587 { 00:32:26.587 
"method": "bdev_iscsi_set_options", 00:32:26.587 "params": { 00:32:26.587 "timeout_sec": 30 00:32:26.587 } 00:32:26.587 }, 00:32:26.587 { 00:32:26.587 "method": "bdev_nvme_set_options", 00:32:26.587 "params": { 00:32:26.587 "action_on_timeout": "none", 00:32:26.587 "timeout_us": 0, 00:32:26.587 "timeout_admin_us": 0, 00:32:26.587 "keep_alive_timeout_ms": 10000, 00:32:26.587 "arbitration_burst": 0, 00:32:26.587 "low_priority_weight": 0, 00:32:26.587 "medium_priority_weight": 0, 00:32:26.587 "high_priority_weight": 0, 00:32:26.587 "nvme_adminq_poll_period_us": 10000, 00:32:26.587 "nvme_ioq_poll_period_us": 0, 00:32:26.587 "io_queue_requests": 512, 00:32:26.588 "delay_cmd_submit": true, 00:32:26.588 "transport_retry_count": 4, 00:32:26.588 "bdev_retry_count": 3, 00:32:26.588 "transport_ack_timeout": 0, 00:32:26.588 "ctrlr_loss_timeout_sec": 0, 00:32:26.588 "reconnect_delay_sec": 0, 00:32:26.588 "fast_io_fail_timeout_sec": 0, 00:32:26.588 "disable_auto_failback": false, 00:32:26.588 "generate_uuids": false, 00:32:26.588 "transport_tos": 0, 00:32:26.588 "nvme_error_stat": false, 00:32:26.588 "rdma_srq_size": 0, 00:32:26.588 "io_path_stat": false, 00:32:26.588 "allow_accel_sequence": false, 00:32:26.588 "rdma_max_cq_size": 0, 00:32:26.588 "rdma_cm_event_timeout_ms": 0, 00:32:26.588 "dhchap_digests": [ 00:32:26.588 "sha256", 00:32:26.588 "sha384", 00:32:26.588 "sha512" 00:32:26.588 ], 00:32:26.588 "dhchap_dhgroups": [ 00:32:26.588 "null", 00:32:26.588 "ffdhe2048", 00:32:26.588 "ffdhe3072", 00:32:26.588 "ffdhe4096", 00:32:26.588 "ffdhe6144", 00:32:26.588 "ffdhe8192" 00:32:26.588 ] 00:32:26.588 } 00:32:26.588 }, 00:32:26.588 { 00:32:26.588 "method": "bdev_nvme_attach_controller", 00:32:26.588 "params": { 00:32:26.588 "name": "nvme0", 00:32:26.588 "trtype": "TCP", 00:32:26.588 "adrfam": "IPv4", 00:32:26.588 "traddr": "127.0.0.1", 00:32:26.588 "trsvcid": "4420", 00:32:26.588 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:26.588 "prchk_reftag": false, 00:32:26.588 "prchk_guard": false, 00:32:26.588 "ctrlr_loss_timeout_sec": 0, 00:32:26.588 "reconnect_delay_sec": 0, 00:32:26.588 "fast_io_fail_timeout_sec": 0, 00:32:26.588 "psk": "key0", 00:32:26.588 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:26.588 "hdgst": false, 00:32:26.588 "ddgst": false 00:32:26.588 } 00:32:26.588 }, 00:32:26.588 { 00:32:26.588 "method": "bdev_nvme_set_hotplug", 00:32:26.588 "params": { 00:32:26.588 "period_us": 100000, 00:32:26.588 "enable": false 00:32:26.588 } 00:32:26.588 }, 00:32:26.588 { 00:32:26.588 "method": "bdev_wait_for_examine" 00:32:26.588 } 00:32:26.588 ] 00:32:26.588 }, 00:32:26.588 { 00:32:26.588 "subsystem": "nbd", 00:32:26.588 "config": [] 00:32:26.588 } 00:32:26.588 ] 00:32:26.588 }' 00:32:26.849 [2024-07-15 22:29:51.949721] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
00:32:26.849 [2024-07-15 22:29:51.949777] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3009746 ] 00:32:26.849 EAL: No free 2048 kB hugepages reported on node 1 00:32:26.849 [2024-07-15 22:29:52.024535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:26.849 [2024-07-15 22:29:52.078112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:27.110 [2024-07-15 22:29:52.219515] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:27.680 22:29:52 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:27.681 22:29:52 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:32:27.681 22:29:52 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:32:27.681 22:29:52 keyring_file -- keyring/file.sh@120 -- # jq length 00:32:27.681 22:29:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:27.681 22:29:52 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:32:27.681 22:29:52 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:32:27.681 22:29:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:27.681 22:29:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:27.681 22:29:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:27.681 22:29:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:27.681 22:29:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:27.940 22:29:53 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:32:27.940 22:29:53 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:32:27.940 22:29:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:27.940 22:29:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:27.940 22:29:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:27.940 22:29:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:27.940 22:29:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:27.940 22:29:53 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:32:27.940 22:29:53 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:32:27.940 22:29:53 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:32:27.940 22:29:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:32:28.200 22:29:53 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:32:28.200 22:29:53 keyring_file -- keyring/file.sh@1 -- # cleanup 00:32:28.200 22:29:53 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.Fv0fXRr9DL /tmp/tmp.rEOhISAbE2 00:32:28.200 22:29:53 keyring_file -- keyring/file.sh@20 -- # killprocess 3009746 00:32:28.200 22:29:53 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3009746 ']' 00:32:28.200 22:29:53 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3009746 00:32:28.200 22:29:53 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:32:28.200 22:29:53 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:28.200 22:29:53 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3009746 00:32:28.200 22:29:53 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:28.200 22:29:53 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:28.200 22:29:53 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3009746' 00:32:28.200 killing process with pid 3009746 00:32:28.200 22:29:53 keyring_file -- common/autotest_common.sh@967 -- # kill 3009746 00:32:28.200 Received shutdown signal, test time was about 1.000000 seconds 00:32:28.200 00:32:28.200 Latency(us) 00:32:28.200 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:28.200 =================================================================================================================== 00:32:28.200 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:28.200 22:29:53 keyring_file -- common/autotest_common.sh@972 -- # wait 3009746 00:32:28.461 22:29:53 keyring_file -- keyring/file.sh@21 -- # killprocess 3007971 00:32:28.461 22:29:53 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3007971 ']' 00:32:28.461 22:29:53 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3007971 00:32:28.461 22:29:53 keyring_file -- common/autotest_common.sh@953 -- # uname 00:32:28.461 22:29:53 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:28.461 22:29:53 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3007971 00:32:28.461 22:29:53 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:28.461 22:29:53 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:28.461 22:29:53 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3007971' 00:32:28.461 killing process with pid 3007971 00:32:28.461 22:29:53 keyring_file -- common/autotest_common.sh@967 -- # kill 3007971 00:32:28.461 [2024-07-15 22:29:53.586916] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:32:28.461 22:29:53 keyring_file -- common/autotest_common.sh@972 -- # wait 3007971 00:32:28.731 00:32:28.731 real 0m11.043s 00:32:28.731 user 0m25.628s 00:32:28.731 sys 0m2.598s 00:32:28.731 22:29:53 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:28.731 22:29:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:28.731 ************************************ 00:32:28.731 END TEST keyring_file 00:32:28.731 ************************************ 00:32:28.731 22:29:53 -- common/autotest_common.sh@1142 -- # return 0 00:32:28.731 22:29:53 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:32:28.731 22:29:53 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:28.731 22:29:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:28.731 22:29:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:28.731 22:29:53 -- common/autotest_common.sh@10 -- # set +x 00:32:28.731 ************************************ 00:32:28.731 START TEST keyring_linux 00:32:28.731 ************************************ 00:32:28.731 22:29:53 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:28.731 * Looking for test storage... 00:32:28.731 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:28.731 22:29:53 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:28.731 22:29:53 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:28.731 22:29:53 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:32:28.731 22:29:53 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:28.731 22:29:53 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:28.731 22:29:53 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:28.731 22:29:53 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:28.731 22:29:53 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:28.731 22:29:53 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:28.731 22:29:53 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:28.731 22:29:53 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:28.731 22:29:53 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:28.731 22:29:53 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:28.731 22:29:53 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:28.731 22:29:53 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:28.731 22:29:53 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:28.731 22:29:53 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:28.731 22:29:53 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:28.731 22:29:53 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:28.731 22:29:53 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:28.731 22:29:53 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:28.731 22:29:53 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:28.731 22:29:53 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:28.731 22:29:53 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:28.731 22:29:53 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:28.731 22:29:53 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:28.731 22:29:53 keyring_linux -- paths/export.sh@5 -- # export PATH 00:32:28.731 22:29:53 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:28.731 22:29:53 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:32:28.731 22:29:53 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:28.731 22:29:53 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:28.731 22:29:53 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:28.731 22:29:53 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:28.731 22:29:53 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:28.731 22:29:53 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:28.731 22:29:53 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:28.731 22:29:53 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:28.731 22:29:54 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:28.731 22:29:54 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:28.731 22:29:54 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:28.731 22:29:54 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:32:28.731 22:29:54 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:32:28.731 22:29:54 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:32:28.731 22:29:54 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:32:28.731 22:29:54 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:28.731 22:29:54 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:32:28.731 22:29:54 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:28.731 22:29:54 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:28.731 22:29:54 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:32:28.731 22:29:54 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:28.731 22:29:54 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:28.731 22:29:54 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:28.731 22:29:54 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:28.731 22:29:54 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:28.731 22:29:54 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:28.731 22:29:54 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:28.731 22:29:54 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:32:28.731 22:29:54 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:32:28.731 /tmp/:spdk-test:key0 00:32:28.731 22:29:54 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:32:28.731 22:29:54 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:28.731 22:29:54 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:32:28.731 22:29:54 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:28.731 22:29:54 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:28.731 22:29:54 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:32:28.731 22:29:54 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:28.993 22:29:54 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:28.993 22:29:54 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:28.993 22:29:54 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:28.993 22:29:54 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:28.993 22:29:54 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:28.993 22:29:54 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:28.993 22:29:54 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:32:28.993 22:29:54 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:32:28.993 /tmp/:spdk-test:key1 00:32:28.993 22:29:54 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3010343 00:32:28.993 22:29:54 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3010343 00:32:28.993 22:29:54 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:28.993 22:29:54 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3010343 ']' 00:32:28.993 22:29:54 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:28.993 22:29:54 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:28.993 22:29:54 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:28.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:28.993 22:29:54 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:28.993 22:29:54 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:28.993 [2024-07-15 22:29:54.156202] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
00:32:28.993 [2024-07-15 22:29:54.156270] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3010343 ] 00:32:28.993 EAL: No free 2048 kB hugepages reported on node 1 00:32:28.993 [2024-07-15 22:29:54.218955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:28.993 [2024-07-15 22:29:54.284727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:29.253 22:29:54 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:29.253 22:29:54 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:32:29.253 22:29:54 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:32:29.253 22:29:54 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.253 22:29:54 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:29.253 [2024-07-15 22:29:54.467675] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:29.253 null0 00:32:29.253 [2024-07-15 22:29:54.499719] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:29.253 [2024-07-15 22:29:54.500095] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:29.253 22:29:54 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.253 22:29:54 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:32:29.253 572669102 00:32:29.253 22:29:54 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:32:29.253 295153795 00:32:29.253 22:29:54 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3010467 00:32:29.253 22:29:54 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3010467 /var/tmp/bperf.sock 00:32:29.253 22:29:54 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:32:29.253 22:29:54 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3010467 ']' 00:32:29.253 22:29:54 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:29.253 22:29:54 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:29.253 22:29:54 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:29.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:29.253 22:29:54 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:29.253 22:29:54 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:29.253 [2024-07-15 22:29:54.575948] Starting SPDK v24.09-pre git sha1 a940d3681 / DPDK 24.03.0 initialization... 
00:32:29.253 [2024-07-15 22:29:54.575994] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3010467 ] 00:32:29.552 EAL: No free 2048 kB hugepages reported on node 1 00:32:29.552 [2024-07-15 22:29:54.650590] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:29.552 [2024-07-15 22:29:54.703980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:30.125 22:29:55 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:30.125 22:29:55 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:32:30.125 22:29:55 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:32:30.125 22:29:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:32:30.386 22:29:55 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:32:30.386 22:29:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:30.386 22:29:55 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:30.386 22:29:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:30.659 [2024-07-15 22:29:55.806256] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:30.659 nvme0n1 00:32:30.659 22:29:55 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:32:30.659 22:29:55 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:32:30.659 22:29:55 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:30.659 22:29:55 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:30.659 22:29:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:30.659 22:29:55 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:30.919 22:29:56 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:32:30.919 22:29:56 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:30.919 22:29:56 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:32:30.919 22:29:56 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:32:30.919 22:29:56 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:30.919 22:29:56 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:32:30.919 22:29:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:30.919 22:29:56 keyring_linux -- keyring/linux.sh@25 -- # sn=572669102 00:32:30.919 22:29:56 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:32:30.919 22:29:56 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:32:30.919 22:29:56 keyring_linux -- keyring/linux.sh@26 -- # [[ 572669102 == \5\7\2\6\6\9\1\0\2 ]] 00:32:30.919 22:29:56 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 572669102 00:32:30.919 22:29:56 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:32:30.919 22:29:56 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:31.178 Running I/O for 1 seconds... 00:32:32.119 00:32:32.120 Latency(us) 00:32:32.120 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:32.120 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:32.120 nvme0n1 : 1.01 8948.94 34.96 0.00 0.00 14195.09 8465.07 23374.51 00:32:32.120 =================================================================================================================== 00:32:32.120 Total : 8948.94 34.96 0.00 0.00 14195.09 8465.07 23374.51 00:32:32.120 0 00:32:32.120 22:29:57 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:32.120 22:29:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:32.381 22:29:57 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:32:32.381 22:29:57 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:32:32.381 22:29:57 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:32.381 22:29:57 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:32.381 22:29:57 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:32.381 22:29:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:32.381 22:29:57 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:32:32.381 22:29:57 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:32.381 22:29:57 keyring_linux -- keyring/linux.sh@23 -- # return 00:32:32.381 22:29:57 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:32.381 22:29:57 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:32:32.381 22:29:57 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:32.381 22:29:57 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:32.381 22:29:57 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:32.381 22:29:57 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:32.381 22:29:57 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:32.381 22:29:57 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:32.381 22:29:57 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:32.641 [2024-07-15 22:29:57.803656] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:32.641 [2024-07-15 22:29:57.804364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f9950 (107): Transport endpoint is not connected 00:32:32.642 [2024-07-15 22:29:57.805360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f9950 (9): Bad file descriptor 00:32:32.642 [2024-07-15 22:29:57.806361] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:32.642 [2024-07-15 22:29:57.806367] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:32.642 [2024-07-15 22:29:57.806372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:32.642 request: 00:32:32.642 { 00:32:32.642 "name": "nvme0", 00:32:32.642 "trtype": "tcp", 00:32:32.642 "traddr": "127.0.0.1", 00:32:32.642 "adrfam": "ipv4", 00:32:32.642 "trsvcid": "4420", 00:32:32.642 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:32.642 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:32.642 "prchk_reftag": false, 00:32:32.642 "prchk_guard": false, 00:32:32.642 "hdgst": false, 00:32:32.642 "ddgst": false, 00:32:32.642 "psk": ":spdk-test:key1", 00:32:32.642 "method": "bdev_nvme_attach_controller", 00:32:32.642 "req_id": 1 00:32:32.642 } 00:32:32.642 Got JSON-RPC error response 00:32:32.642 response: 00:32:32.642 { 00:32:32.642 "code": -5, 00:32:32.642 "message": "Input/output error" 00:32:32.642 } 00:32:32.642 22:29:57 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:32:32.642 22:29:57 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:32.642 22:29:57 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:32.642 22:29:57 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:32.642 22:29:57 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:32:32.642 22:29:57 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:32.642 22:29:57 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:32:32.642 22:29:57 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:32:32.642 22:29:57 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:32:32.642 22:29:57 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:32:32.642 22:29:57 keyring_linux -- keyring/linux.sh@33 -- # sn=572669102 00:32:32.642 22:29:57 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 572669102 00:32:32.642 1 links removed 00:32:32.642 22:29:57 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:32.642 22:29:57 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:32:32.642 22:29:57 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:32:32.642 22:29:57 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:32:32.642 22:29:57 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:32:32.642 22:29:57 keyring_linux -- keyring/linux.sh@33 -- # sn=295153795 00:32:32.642 
22:29:57 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 295153795 00:32:32.642 1 links removed 00:32:32.642 22:29:57 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3010467 00:32:32.642 22:29:57 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3010467 ']' 00:32:32.642 22:29:57 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3010467 00:32:32.642 22:29:57 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:32:32.642 22:29:57 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:32.642 22:29:57 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3010467 00:32:32.642 22:29:57 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:32.642 22:29:57 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:32.642 22:29:57 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3010467' 00:32:32.642 killing process with pid 3010467 00:32:32.642 22:29:57 keyring_linux -- common/autotest_common.sh@967 -- # kill 3010467 00:32:32.642 Received shutdown signal, test time was about 1.000000 seconds 00:32:32.642 00:32:32.642 Latency(us) 00:32:32.642 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:32.642 =================================================================================================================== 00:32:32.642 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:32.642 22:29:57 keyring_linux -- common/autotest_common.sh@972 -- # wait 3010467 00:32:32.902 22:29:57 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3010343 00:32:32.902 22:29:57 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3010343 ']' 00:32:32.902 22:29:58 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3010343 00:32:32.902 22:29:58 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:32:32.902 22:29:58 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:32.902 22:29:58 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3010343 00:32:32.902 22:29:58 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:32.902 22:29:58 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:32.902 22:29:58 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3010343' 00:32:32.902 killing process with pid 3010343 00:32:32.902 22:29:58 keyring_linux -- common/autotest_common.sh@967 -- # kill 3010343 00:32:32.902 22:29:58 keyring_linux -- common/autotest_common.sh@972 -- # wait 3010343 00:32:33.164 00:32:33.164 real 0m4.396s 00:32:33.164 user 0m7.805s 00:32:33.164 sys 0m1.225s 00:32:33.164 22:29:58 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:33.164 22:29:58 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:33.164 ************************************ 00:32:33.164 END TEST keyring_linux 00:32:33.164 ************************************ 00:32:33.164 22:29:58 -- common/autotest_common.sh@1142 -- # return 0 00:32:33.164 22:29:58 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:32:33.164 22:29:58 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:32:33.164 22:29:58 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:32:33.164 22:29:58 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:32:33.164 22:29:58 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:32:33.164 22:29:58 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:32:33.164 22:29:58 -- 
spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:32:33.164 22:29:58 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:32:33.164 22:29:58 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:32:33.164 22:29:58 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:32:33.164 22:29:58 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:32:33.164 22:29:58 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:32:33.164 22:29:58 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:32:33.164 22:29:58 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:32:33.164 22:29:58 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:32:33.165 22:29:58 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:32:33.165 22:29:58 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:32:33.165 22:29:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:33.165 22:29:58 -- common/autotest_common.sh@10 -- # set +x 00:32:33.165 22:29:58 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:32:33.165 22:29:58 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:32:33.165 22:29:58 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:32:33.165 22:29:58 -- common/autotest_common.sh@10 -- # set +x 00:32:41.308 INFO: APP EXITING 00:32:41.308 INFO: killing all VMs 00:32:41.308 INFO: killing vhost app 00:32:41.308 WARN: no vhost pid file found 00:32:41.308 INFO: EXIT DONE 00:32:43.857 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:32:43.857 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:32:43.857 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:32:43.857 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:32:43.857 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:32:43.857 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:32:43.857 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:32:43.857 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:32:43.857 0000:65:00.0 (144d a80a): Already using the nvme driver 00:32:43.857 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:32:43.857 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:32:44.118 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:32:44.118 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:32:44.118 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:32:44.118 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:32:44.118 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:32:44.118 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:32:48.331 Cleaning 00:32:48.331 Removing: /var/run/dpdk/spdk0/config 00:32:48.331 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:48.331 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:48.331 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:48.331 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:48.331 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:32:48.331 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:32:48.331 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:32:48.331 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:32:48.331 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:48.331 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:48.331 Removing: /var/run/dpdk/spdk1/config 00:32:48.331 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:48.331 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:32:48.331 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:32:48.331 
Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:32:48.331 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:32:48.331 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:32:48.331 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:32:48.331 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:32:48.331 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:32:48.331 Removing: /var/run/dpdk/spdk1/hugepage_info 00:32:48.331 Removing: /var/run/dpdk/spdk1/mp_socket 00:32:48.331 Removing: /var/run/dpdk/spdk2/config 00:32:48.331 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:32:48.331 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:32:48.331 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:32:48.331 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:32:48.331 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:32:48.331 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:32:48.331 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:32:48.331 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:32:48.331 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:32:48.331 Removing: /var/run/dpdk/spdk2/hugepage_info 00:32:48.331 Removing: /var/run/dpdk/spdk3/config 00:32:48.331 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:32:48.331 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:32:48.331 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:32:48.331 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:32:48.331 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:32:48.331 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:32:48.331 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:32:48.331 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:32:48.331 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:32:48.331 Removing: /var/run/dpdk/spdk3/hugepage_info 00:32:48.331 Removing: /var/run/dpdk/spdk4/config 00:32:48.331 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:32:48.331 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:32:48.331 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:32:48.331 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:32:48.331 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:32:48.331 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:32:48.331 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:32:48.331 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:32:48.331 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:32:48.331 Removing: /var/run/dpdk/spdk4/hugepage_info 00:32:48.331 Removing: /dev/shm/bdev_svc_trace.1 00:32:48.331 Removing: /dev/shm/nvmf_trace.0 00:32:48.331 Removing: /dev/shm/spdk_tgt_trace.pid2553048 00:32:48.331 Removing: /var/run/dpdk/spdk0 00:32:48.331 Removing: /var/run/dpdk/spdk1 00:32:48.331 Removing: /var/run/dpdk/spdk2 00:32:48.331 Removing: /var/run/dpdk/spdk3 00:32:48.331 Removing: /var/run/dpdk/spdk4 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2551570 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2553048 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2553830 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2554919 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2555208 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2556328 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2556385 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2556782 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2557959 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2558479 00:32:48.331 Removing: 
/var/run/dpdk/spdk_pid2558858 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2559253 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2559653 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2560297 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2560628 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2560899 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2561273 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2562378 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2565916 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2566206 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2566516 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2566657 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2567036 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2567365 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2567742 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2567772 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2568116 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2568437 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2568494 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2568823 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2569265 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2569614 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2569896 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2570080 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2570254 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2570466 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2570788 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2570977 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2571207 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2571556 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2571906 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2572259 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2572421 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2572644 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2572998 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2573347 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2573694 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2573909 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2574109 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2574437 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2574790 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2575141 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2575387 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2575602 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2575898 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2576255 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2576344 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2576734 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2581178 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2635307 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2640339 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2652025 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2658403 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2663515 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2664196 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2671871 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2679052 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2679057 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2680065 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2681117 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2682227 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2682853 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2683003 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2683231 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2683411 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2683418 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2684425 00:32:48.331 Removing: 
/var/run/dpdk/spdk_pid2685433 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2686438 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2687112 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2687123 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2687453 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2688878 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2690278 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2700285 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2700637 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2705424 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2712702 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2716042 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2728034 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2738623 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2740756 00:32:48.331 Removing: /var/run/dpdk/spdk_pid2741880 00:32:48.332 Removing: /var/run/dpdk/spdk_pid2762092 00:32:48.332 Removing: /var/run/dpdk/spdk_pid2766639 00:32:48.332 Removing: /var/run/dpdk/spdk_pid2798679 00:32:48.332 Removing: /var/run/dpdk/spdk_pid2803887 00:32:48.332 Removing: /var/run/dpdk/spdk_pid2805848 00:32:48.332 Removing: /var/run/dpdk/spdk_pid2808012 00:32:48.332 Removing: /var/run/dpdk/spdk_pid2808349 00:32:48.332 Removing: /var/run/dpdk/spdk_pid2808364 00:32:48.332 Removing: /var/run/dpdk/spdk_pid2808704 00:32:48.332 Removing: /var/run/dpdk/spdk_pid2809856 00:32:48.332 Removing: /var/run/dpdk/spdk_pid2811906 00:32:48.332 Removing: /var/run/dpdk/spdk_pid2812974 00:32:48.332 Removing: /var/run/dpdk/spdk_pid2813681 00:32:48.332 Removing: /var/run/dpdk/spdk_pid2816158 00:32:48.332 Removing: /var/run/dpdk/spdk_pid2816971 00:32:48.332 Removing: /var/run/dpdk/spdk_pid2817803 00:32:48.332 Removing: /var/run/dpdk/spdk_pid2822557 00:32:48.593 Removing: /var/run/dpdk/spdk_pid2834695 00:32:48.593 Removing: /var/run/dpdk/spdk_pid2839576 00:32:48.593 Removing: /var/run/dpdk/spdk_pid2846805 00:32:48.593 Removing: /var/run/dpdk/spdk_pid2848301 00:32:48.593 Removing: /var/run/dpdk/spdk_pid2849869 00:32:48.593 Removing: /var/run/dpdk/spdk_pid2855067 00:32:48.593 Removing: /var/run/dpdk/spdk_pid2860506 00:32:48.593 Removing: /var/run/dpdk/spdk_pid2869466 00:32:48.593 Removing: /var/run/dpdk/spdk_pid2869569 00:32:48.593 Removing: /var/run/dpdk/spdk_pid2874472 00:32:48.593 Removing: /var/run/dpdk/spdk_pid2874638 00:32:48.593 Removing: /var/run/dpdk/spdk_pid2874961 00:32:48.593 Removing: /var/run/dpdk/spdk_pid2875453 00:32:48.593 Removing: /var/run/dpdk/spdk_pid2875555 00:32:48.593 Removing: /var/run/dpdk/spdk_pid2880991 00:32:48.593 Removing: /var/run/dpdk/spdk_pid2881576 00:32:48.593 Removing: /var/run/dpdk/spdk_pid2886986 00:32:48.593 Removing: /var/run/dpdk/spdk_pid2890259 00:32:48.593 Removing: /var/run/dpdk/spdk_pid2896715 00:32:48.593 Removing: /var/run/dpdk/spdk_pid2903109 00:32:48.593 Removing: /var/run/dpdk/spdk_pid2913030 00:32:48.593 Removing: /var/run/dpdk/spdk_pid2922116 00:32:48.593 Removing: /var/run/dpdk/spdk_pid2922165 00:32:48.593 Removing: /var/run/dpdk/spdk_pid2944240 00:32:48.593 Removing: /var/run/dpdk/spdk_pid2945006 00:32:48.593 Removing: /var/run/dpdk/spdk_pid2945782 00:32:48.593 Removing: /var/run/dpdk/spdk_pid2946559 00:32:48.593 Removing: /var/run/dpdk/spdk_pid2947557 00:32:48.593 Removing: /var/run/dpdk/spdk_pid2948307 00:32:48.593 Removing: /var/run/dpdk/spdk_pid2948991 00:32:48.593 Removing: /var/run/dpdk/spdk_pid2949673 00:32:48.593 Removing: /var/run/dpdk/spdk_pid2954721 00:32:48.593 Removing: /var/run/dpdk/spdk_pid2955050 00:32:48.593 Removing: /var/run/dpdk/spdk_pid2962083 00:32:48.593 Removing: 
/var/run/dpdk/spdk_pid2962456 00:32:48.593 Removing: /var/run/dpdk/spdk_pid2965073 00:32:48.593 Removing: /var/run/dpdk/spdk_pid2972629 00:32:48.593 Removing: /var/run/dpdk/spdk_pid2972634 00:32:48.593 Removing: /var/run/dpdk/spdk_pid2978495 00:32:48.593 Removing: /var/run/dpdk/spdk_pid2980839 00:32:48.593 Removing: /var/run/dpdk/spdk_pid2983206 00:32:48.593 Removing: /var/run/dpdk/spdk_pid2984558 00:32:48.593 Removing: /var/run/dpdk/spdk_pid2986919 00:32:48.593 Removing: /var/run/dpdk/spdk_pid2988438 00:32:48.593 Removing: /var/run/dpdk/spdk_pid2998351 00:32:48.593 Removing: /var/run/dpdk/spdk_pid2998838 00:32:48.593 Removing: /var/run/dpdk/spdk_pid2999416 00:32:48.593 Removing: /var/run/dpdk/spdk_pid3002335 00:32:48.593 Removing: /var/run/dpdk/spdk_pid3002823 00:32:48.593 Removing: /var/run/dpdk/spdk_pid3003355 00:32:48.593 Removing: /var/run/dpdk/spdk_pid3007971 00:32:48.593 Removing: /var/run/dpdk/spdk_pid3008222 00:32:48.593 Removing: /var/run/dpdk/spdk_pid3009746 00:32:48.593 Removing: /var/run/dpdk/spdk_pid3010343 00:32:48.593 Removing: /var/run/dpdk/spdk_pid3010467 00:32:48.593 Clean 00:32:48.854 22:30:13 -- common/autotest_common.sh@1451 -- # return 0 00:32:48.854 22:30:13 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:32:48.854 22:30:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:48.854 22:30:13 -- common/autotest_common.sh@10 -- # set +x 00:32:48.854 22:30:14 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:32:48.854 22:30:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:48.854 22:30:14 -- common/autotest_common.sh@10 -- # set +x 00:32:48.854 22:30:14 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:32:48.854 22:30:14 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:32:48.854 22:30:14 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:32:48.854 22:30:14 -- spdk/autotest.sh@391 -- # hash lcov 00:32:48.854 22:30:14 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:32:48.854 22:30:14 -- spdk/autotest.sh@393 -- # hostname 00:32:48.854 22:30:14 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:32:49.115 geninfo: WARNING: invalid characters removed from testname! 
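The coverage post-processing traced immediately above and below follows the usual lcov sequence: capture counters after the run, merge them with the baseline capture, then strip paths that are not of interest. A minimal standalone sketch of the same steps, assuming $SPDK_DIR points at the checked-out spdk tree (the branch-coverage rc options and the remaining filter patterns are omitted for brevity):

lcov -q -c -d "$SPDK_DIR" -t "$(hostname)" -o cov_test.info     # capture counters from the instrumented build
lcov -q -a cov_base.info -a cov_test.info -o cov_total.info     # merge with the pre-test baseline
lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info          # drop bundled DPDK sources
lcov -q -r cov_total.info '/usr/*'   -o cov_total.info          # drop system headers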
00:33:15.762 22:30:38 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:16.022 22:30:41 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:17.931 22:30:42 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:19.310 22:30:44 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:21.219 22:30:46 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:22.600 22:30:47 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:23.986 22:30:49 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:23.986 22:30:49 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:23.986 22:30:49 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:33:23.986 22:30:49 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:23.986 22:30:49 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:23.986 22:30:49 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.986 22:30:49 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.986 22:30:49 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.986 22:30:49 -- paths/export.sh@5 -- $ export PATH 00:33:23.986 22:30:49 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.986 22:30:49 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:33:23.986 22:30:49 -- common/autobuild_common.sh@444 -- $ date +%s 00:33:23.986 22:30:49 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721075449.XXXXXX 00:33:23.986 22:30:49 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721075449.GwFAap 00:33:23.986 22:30:49 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:33:23.986 22:30:49 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:33:23.986 22:30:49 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:33:23.986 22:30:49 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:33:23.986 22:30:49 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:33:23.986 22:30:49 -- common/autobuild_common.sh@460 -- $ get_config_params 00:33:23.986 22:30:49 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:33:24.247 22:30:49 -- common/autotest_common.sh@10 -- $ set +x 00:33:24.247 22:30:49 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:33:24.247 22:30:49 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:33:24.247 22:30:49 -- pm/common@17 -- $ local monitor 00:33:24.247 22:30:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:24.247 22:30:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:24.247 22:30:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:24.247 22:30:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:24.247 22:30:49 -- pm/common@21 -- $ date +%s 00:33:24.247 22:30:49 -- pm/common@21 -- $ date +%s 00:33:24.247 
22:30:49 -- pm/common@25 -- $ sleep 1 00:33:24.247 22:30:49 -- pm/common@21 -- $ date +%s 00:33:24.247 22:30:49 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721075449 00:33:24.247 22:30:49 -- pm/common@21 -- $ date +%s 00:33:24.247 22:30:49 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721075449 00:33:24.247 22:30:49 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721075449 00:33:24.247 22:30:49 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721075449 00:33:24.247 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721075449_collect-cpu-load.pm.log 00:33:24.247 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721075449_collect-vmstat.pm.log 00:33:24.247 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721075449_collect-cpu-temp.pm.log 00:33:24.247 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721075449_collect-bmc-pm.bmc.pm.log 00:33:25.189 22:30:50 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:33:25.189 22:30:50 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:33:25.189 22:30:50 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:25.189 22:30:50 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:33:25.189 22:30:50 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:33:25.189 22:30:50 -- spdk/autopackage.sh@19 -- $ timing_finish 00:33:25.189 22:30:50 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:33:25.189 22:30:50 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:33:25.189 22:30:50 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:25.189 22:30:50 -- spdk/autopackage.sh@20 -- $ exit 0 00:33:25.189 22:30:50 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:33:25.189 22:30:50 -- pm/common@29 -- $ signal_monitor_resources TERM 00:33:25.189 22:30:50 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:33:25.189 22:30:50 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:25.189 22:30:50 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:33:25.189 22:30:50 -- pm/common@44 -- $ pid=3023457 00:33:25.189 22:30:50 -- pm/common@50 -- $ kill -TERM 3023457 00:33:25.189 22:30:50 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:25.189 22:30:50 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:33:25.189 22:30:50 -- pm/common@44 -- $ pid=3023458 00:33:25.189 22:30:50 -- pm/common@50 -- $ kill 
-TERM 3023458 00:33:25.189 22:30:50 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:25.189 22:30:50 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:33:25.189 22:30:50 -- pm/common@44 -- $ pid=3023460 00:33:25.189 22:30:50 -- pm/common@50 -- $ kill -TERM 3023460 00:33:25.189 22:30:50 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:25.189 22:30:50 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:33:25.189 22:30:50 -- pm/common@44 -- $ pid=3023487 00:33:25.189 22:30:50 -- pm/common@50 -- $ sudo -E kill -TERM 3023487 00:33:25.189 + [[ -n 2432000 ]] 00:33:25.189 + sudo kill 2432000 00:33:25.201 [Pipeline] } 00:33:25.221 [Pipeline] // stage 00:33:25.227 [Pipeline] } 00:33:25.247 [Pipeline] // timeout 00:33:25.253 [Pipeline] } 00:33:25.272 [Pipeline] // catchError 00:33:25.278 [Pipeline] } 00:33:25.297 [Pipeline] // wrap 00:33:25.302 [Pipeline] } 00:33:25.318 [Pipeline] // catchError 00:33:25.326 [Pipeline] stage 00:33:25.329 [Pipeline] { (Epilogue) 00:33:25.344 [Pipeline] catchError 00:33:25.346 [Pipeline] { 00:33:25.361 [Pipeline] echo 00:33:25.363 Cleanup processes 00:33:25.370 [Pipeline] sh 00:33:25.657 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:25.657 3023564 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:33:25.657 3024004 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:25.677 [Pipeline] sh 00:33:25.969 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:25.970 ++ grep -v 'sudo pgrep' 00:33:25.970 ++ awk '{print $1}' 00:33:25.970 + sudo kill -9 3023564 00:33:25.983 [Pipeline] sh 00:33:26.271 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:33:38.559 [Pipeline] sh 00:33:38.846 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:33:38.846 Artifacts sizes are good 00:33:38.861 [Pipeline] archiveArtifacts 00:33:38.868 Archiving artifacts 00:33:39.054 [Pipeline] sh 00:33:39.348 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:33:39.365 [Pipeline] cleanWs 00:33:39.377 [WS-CLEANUP] Deleting project workspace... 00:33:39.377 [WS-CLEANUP] Deferred wipeout is used... 00:33:39.385 [WS-CLEANUP] done 00:33:39.388 [Pipeline] } 00:33:39.408 [Pipeline] // catchError 00:33:39.421 [Pipeline] sh 00:33:39.711 + logger -p user.info -t JENKINS-CI 00:33:39.722 [Pipeline] } 00:33:39.739 [Pipeline] // stage 00:33:39.747 [Pipeline] } 00:33:39.766 [Pipeline] // node 00:33:39.772 [Pipeline] End of Pipeline 00:33:39.798 Finished: SUCCESS